Saturday, May 30, 2015

journey to the center of the laptop

The last time I described how ideas from my software career shaped my present thinking, the topic was the interdependency between the meanings of data and code. The effective meaning of data was rooted in the details of "information systems" behind it: purposeful sequences of computer code and human labor to methodically record it, construct it, augment it, alter it, mix it with more data, etc. But the same observation could be reversed: the effective meaning (and correctness) of the information system was no more than its demonstrated transformations of data.

This viewpoint appeared to apply in other domains as well. For a wide range of candidate concepts, probing the equivalent of the concept's supporting "information system" usefully sifted its detectable meaning. How did the concept originally arise? How could the concept's definitions, verifications, and interpretations be (or fail to be) repeated and rechecked? Prospective data was discarded if these questions lacked satisfactory answers; should pompous concepts face lower standards?

However, not all the software ideas were at the scale of information systems. Some knowledge illuminated the running of a single laptop. For instance, where does the laptop's computation happen? Where's the site of its mysterious data alchemy? What's the core of its "thinking"—with the precondition that this loaded term is applied purely in the loose, informal, metaphorical sense? (Note that the following will rely on simplified technological generalizations too...) The natural course of investigation is from the outside in.
  • To start with, probably everyone who regularly uses a laptop would say that the thinking takes place inside the unit. The ports around its edges for connecting audio, video, networking, generic devices, etc. are optional. These connections are great for enabling additional options to transport information to and from the laptop, but they don't enable the laptop to think. The exceptions are the battery slot and/or the power jack, which are nonetheless only providers of the raw energy consumed by the laptop's thinking.
  • Similarly, it doesn't require technical training to presume that the laptop's screen, speakers, keyboard, touchpad, camera, etc. aren't where the laptop thinks. The screen may shut off to save power. The speakers may be muted. The keyboard and touchpad are replaceable methods to detect and report the user's motions. Although these accessible inputs and outputs are vital to the user's experience of the laptop, their functions are like translation rather than thinking. Either the user's actions are transported to the laptop's innards as streams of impulses, or the final outcomes of the laptop's thinking are transported back out to the user's senses. 
  • Consequently, the interior is a more promising space to look. Encased in the walls of the laptop, under the keyboard, behind the speakers, is a meticulously assembled collection of incredibly flat and thin parts. Some common kinds of parts are temporary memory (RAM), permanent storage (internal drives), disc drives (CD, DVD, Blu-ray), and wireless networking (Wi-Fi). By design this group receives, holds, and sends information. Information is transported but not thought about. So the thinking must occur in the component that's on the opposite side of this group's diverse attachments: the main board or motherboard.
  • To accommodate and manage the previously mentioned external ports and internal parts, the motherboard is loaded with hierarchical circuitry. It's like a mass of interconnected highways or conveyor belts. Signals travel in from the port or part, reach a hub, proceed to a later hub, and so forth. As a speedy rest stop for long-running work in progress, the temporary memory is a frequent start or end. The intricacy of contemporary device links ensures that motherboards are both busy and sophisticated, yet once more the overall task is unglamorous transportation. There's a further clue for continuing the search for thinking, though. For these transportation requests to be orderly and appropriate, the requests' source has to be the laptop's thinking. That source is the central processing unit (CPU).
  • Analysis of the CPU risks a rapid slide into complexity and the specifics of individual models. At an abstract level, the CPU is divided into separate sections with designated roles. One is loading individual instructions for execution. Another is breaking down those instructions into elemental activities of actual CPU sections. A few out of many categories of these numerous elemental activities are rudimentary mathematical operations, comparisons, copying sets of bits (binary digits, either zeros or ones) among distinct areas in the CPU's working memory, rewriting which instruction is next, and dispatching sets of bits in and out of the CPU. In any case, the sections' productive cooperation consists of transporting bits from section to section at the proper times. Again setting aside mere transporting, the remaining hideout for the laptop's thinking is somewhere inside those specialized CPU sections completing the assigned elemental activities.
  • Also considered at an abstract level, these CPU sections in turn are built from myriad tiny "gates": electronics organized to produce differing results depending on differing combinations of electricity flowing in. For example, an "AND" gate earns its name through emitting an "on" electric current when the gate's first entry point AND the second have "on" currents. Odd as it may sound, various gates ingeniously laid out, end to end and side by side, can perfectly perform the elemental activities of CPU sections. All that's demanded is that the information has consistent binary (bit) representations, which map directly onto the gates' notions of off and on. The elemental activities are performed on the information as the matching electric currents are transported through the gates. And since thinking is vastly more intriguing than dull transportation of information in any form, the hunt through the laptop needs to advance from gates to...um...er...uh... 
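The gate behavior just described can be sketched in a few lines of Python. This is only a toy model of the logic, not of the electronics; the function names mirror the gates' conventional names, and 1/0 stand in for "on" and "off" currents:

```python
# Toy model of logic gates: each gate maps input bits (0 or 1) to an output bit.
# Real gates are transistors switching currents; this captures only the logic.

def AND(a, b):
    # emits "on" (1) only when the first input AND the second are "on"
    return a & b

def OR(a, b):
    # emits "on" when either input is "on"
    return a | b

def NOT(a):
    # inverts its single input
    return 1 - a

def XOR(a, b):
    # "exclusive or", itself built by laying out simpler gates end to end:
    # on when exactly one input is on
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# The AND gate's full behavior across input combinations (its "truth table"):
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b))
```

Note that XOR above isn't primitive: it's assembled from AND, OR, and NOT, a first taste of gates "ingeniously laid out, end to end and side by side."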
This expedition was predictably doomed from the beginning. Peering deeper doesn't uncover a sharp break between "thinking" and conducting bits in complicated intersecting routes. No, the impression of thought is generated via algorithms, which are engineered arrangements of such routes. The spectacular whole isn't discredited by its unremarkable pieces. Valuable qualities can "emerge" from a cluster of pieces that don't have the quality in isolation. In fact, emergent qualities are ubiquitous, unmagical, and important. Single carbon atoms don't reproduce, but carbon-based life does.
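Emergence of this kind can be made concrete. None of the gates below "knows" arithmetic, yet chained together in the textbook ripple-carry layout (a generic design, not any particular CPU's), they add numbers. A minimal Python sketch:

```python
# Three gates, none of which can do arithmetic on its own.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # Adds three single bits; returns (sum bit, carry-out bit).
    s1 = XOR(a, b)
    total = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return total, carry_out

def add_bits(x, y, width=8):
    # Ripple-carry adder: a chain of full adders, one per bit position,
    # each passing its carry to the next. Addition "emerges" from the chain.
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_bits(19, 23))  # → 42
```

Each gate merely routes bits; the arrangement of the routes is what performs addition, which is the whole point about algorithms above.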

Ultimately, greater comprehension forces the recognition that the laptop's version of thinking is an emergent quality. Information processing isn't the accomplishment of a miraculous segment of it; it's more like the total collaborative effect of its abundant unremarkable segments. An outsider might scoff that "adding enough stupid things together yields something smart", but an insider grasps that the way those stupid things are "added" together makes a huge difference.

Readers can likely guess the conclusion: this understanding prepares someone to contemplate that all versions of thinking could be emergent qualities. Just as the paths in the laptop were the essence of its information processing, what if the paths in creatures' brains were the essence of their information processing? Laptops don't have a particular segment that supplies the "spark" of intelligence, so what if creatures' brains don't either? Admittedly, it's possible to escape by objecting that creatures' brains are, in some unspecified manner, fundamentally unlike everything else made of matter, but that exception seems suspiciously self-serving for a creature to propose... 

Sunday, May 17, 2015

MUSHy identity, mission, and investment

Recent blog entries have analyzed why beliefs can thrive in the face of missing or lackluster corroboration...particularly forms of corroboration that are inherently unrepeatable, infeasible, or illogical. Such beliefs can persist indefinitely in the midst of a self-reinforcing group of followers, who are extremely Serious. In a number of instructive ways, the overall result eerily echoes a MUSH: an early internet example of a group of hobbyists maintaining and experiencing a fictional realm together.

For instance, the group itself collectively acts as the sole measurement of "accuracy" for the content. And the deficiency in outside corroboration doesn't reduce the genuinely satisfying mental benefits furnished by the combined contributions of the group and the content. Human culture contains a multitude of comparisons (the internet alone never ceases facilitating far-flung topical communities). A MUSH is especially suitable because it's small, mild, and uncomplicated. It's less predominant and ambitious than its Serious counterparts, yet the telling similarities show how MUSHy those can be.

As already mentioned, mental benefits attract individuals to the group and the content and then keep their attention thereafter. Extracting even more commitment demands the potent incentive of deep-seated connections. One of those effective connections is identity. Once again, a MUSH presents harmless, miniaturized manifestations of the phenomenon. In this case, its content offers lists of fanciful identities for the participants to "inhabit"...during the times when they're logged in and typing. Whatever the size of the role they play, they become familiar with filling it. They know it completely and comfortably. They're adapted to meeting its expectations. Abandoning the MUSH requires abandoning that accustomed (and entertaining) portion of their identity.

The molding of members' identities is typical of Serious groups too, but to a larger extent, of course. Unlike a MUSH identity, perhaps their Serious identity is supposed to be constant, like a mask that can't ever be dropped. Also unlike a MUSH, they may spend the majority of their daily lives surrounded by the group. Details aside, once they have firmly embedded their group role into their personal identities, nonchalant challenges to the content appear as aggressive challenges to pieces of them. When reevaluating an idea entails potentially breaking their identity, they have an excuse to avoid going through with it.

Other kinds of connections complement the connection to identity: a mission is an excellent second kind. Whereas handing over an identity clarifies who someone is to be, handing over a mission further clarifies what they are to do. An unambiguous mission offers manifold gratifications. Minor or intermediate goals are chances for short-term victory, and simultaneously, grand or virtually unattainable goals are endless reasons to continue striving. With a mission, group members can see a need for their exertions—and themselves. Their actions have weight and direction. They can feel that they're accomplishing valuable progress. They're part of making circumstances "better"...as defined by the mission objective, at any rate. They're an asset to the team, so giving up would be a disappointing, selfish betrayal of the team's trust. The "missions" pursued during long-term in-depth MUSH participation probably sound uncaptivating at first, but the basic allure shouldn't be underestimated. An amplifying cycle is at work: an intensely felt connection has the effect of motivating greater involvement, and steadily expanding involvement has the effect of intensifying the feeling of connection.

Obviously, missions of all shapes and sizes are pervasive in Serious groups as well. Although repetitive passive contemplation of the content gradually hardens it in followers' thoughts, missions dramatically animate it in their lives. By affecting group behavior directly, the content accrues exciting meaningfulness and importance. For more than a few, inspiring missions might engross them so much that almost all of the content itself is peripheral—inconsequential minutiae that are little more than outdated decoration.

Carrying out a mission implies investing toward it, but more generally, any investments are a third kind of enduring connection to the group and content. Likelier than not, financial investment is only one of several types. In practice the rest might be more burdensome: sacrifices of time, effort, and competing opportunities. The common thread is that each investment raises the stakes. Thereafter, rejecting the group and the content has the expensive price of affirming that these formerly promising trade-offs were worthless all along.

For a MUSH, this risk isn't terribly chilling. Still, it does sway the decision to come back again and again. The time-consuming events and deeds of past sessions are partially intended to set the stage for intriguing future sessions; to walk away and never use that "stage" seems wasteful. Just by investing enough in something, no matter how little it is, the investor transfers a sentiment of ownership onto it. They acquire a well-founded interest in its fate. Additionally, active group socializing is an "investment", though that mercenary mindset is best avoided. Initiating enjoyable group interactions increases the perceived value of the group...for more interactions. After someone has poured themselves into opening and preserving reliable companionship, including at shallow or casual levels, they're understandably reluctant to hastily discard it.

By contrast with these low-key instances, the routine investments associated with Serious groups are strict and obligatory. This policy is plainly strategic: forcing quick, large investments secures the loyalty of new followers. The more that they invest, the more determined they are to think that their venture is deserving. Moreover, before the investments are actually tried, an above average cost for the group and the content creates a pretense of above average worth to justify the cost. That pretense exploits the usual link from relative superiority to a relatively greater charge—good stuff usually isn't easy or free. On the other hand, the most savvy strategy of all might be a set of (maybe implicit) tiers: starting tiers stipulate low but non-zero investments, yet the group continually prods everyone to progress to tiers of greater investments.

Given the cumulative pull of these deep-seated connections, the relevant question isn't necessarily "How is it possible for Serious groups to cluster around ideas with problematic corroboration and then sustain those ideas?", but "How could such groups possibly not exist when broadly similar elements apparently suffice for fueling decidedly unserious groups, like a MUSH that merely clusters around outlandish stories?"

Saturday, May 02, 2015

data : code :: concept : verification

I've sometimes mused about whether my eventual embrace of a Pragmatism-esque philosophy was inevitable. The ever-present danger in musings like this is ordinary hindsight bias: concealing the actual complexity after the fact with simple, tempting connections between present and past. I can't plausibly propose that the same connections would impart equal force on everyone else. In general, I can't rashly declare that everyone who shares one set of similarities with me is obligated to share other sets of similarities. Hastily viewing everyone else through the tiny lens of myself is egocentrism, not well-founded extrapolation.

For example, I admit I can't claim that my career in software development played an instrumental role in the switch. I know too many competent colleagues whose beliefs clash with mine. At the same time, a far different past career hasn't stopped individuals in the Clergy Project from eventually reaching congenial beliefs. Nevertheless, I can try to explain how some aspects of my specific career acted as clues that prepared and nudged me. My accustomed thought patterns within the vocational context seeped into my thought patterns within other contexts.

During education and on the job, I encountered the inseparable ties between data and code. Most obviously, the final data was the purpose of running the code (in games the final data was for immediately synthesizing a gameplay experience). Almost as obvious, the code couldn't run without the data flowing into it. Superficially, in a single ideal program, code and data were easily distinguishable collaborators taking turns being perfect. Perhaps a data set went in, and a digest of statistical measurements came out, and the unseen code might have run in a machine on the other side of the internet.
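That "ideal program" picture is easy to sketch. The example below is a hypothetical digest function of my own invention, using Python's standard statistics module; its only purpose is to show the clean division of labor, with inert code on one side and a data set on the other:

```python
# Data goes in, code runs, a digest of statistical measurements comes out.
from statistics import mean, median

def digest(data):
    # Without data flowing in, this code is futile and motionless;
    # without this code, the data set yields no digest.
    return {"count": len(data), "mean": mean(data), "median": median(data)}

print(digest([3, 1, 4, 1, 5, 9, 2, 6]))
```

From the user's side, the digest could just as well have been computed on a machine across the internet; the collaboration between code and data looks the same.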

At a more detailed level of comprehension, and in messy and/or faulty projects cobbled together from several prior projects, that rosy view became less sensible. When final data was independently shown to be inaccurate, the initial cause was sometimes difficult to deduce. Along the bumpy journey to the rejected result, data flowed in and out of multiple avenues of code. Fortunately the result retained meaningfulness about the interwoven path of data and code that led to it, regardless of its regrettable lack of meaningfulness in regard to its intended purpose. It authentically represented a problem with that path. Thus its externally checked mistakenness didn't in the least reduce its value for pinpointing and resolving that path's problems.
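One way to picture why a rejected result stays meaningful about its path: if each avenue of code tags the data it touches, then even a wrong answer pinpoints where to look. The stage names and the deliberate bug below are invented purely for this sketch:

```python
# Hypothetical two-stage pipeline where each stage records itself
# in the record's provenance as the data flows through it.
def clean(record):
    record["value"] = record["value"].strip()
    record["path"].append("clean")
    return record

def convert(record):
    # Deliberate flaw for the sketch: truncates instead of rounding.
    record["value"] = int(float(record["value"]))
    record["path"].append("convert")
    return record

record = {"value": " 41.9 ", "path": []}
for stage in (clean, convert):
    record = stage(record)

# The final value is inaccurate (41 rather than a rounded 42), yet the
# recorded path still says exactly which code produced it.
print(record)  # {'value': 41, 'path': ['clean', 'convert']}
```

The result fails at its intended purpose while authentically representing the interwoven path of data and code behind it, which is precisely what makes the flaw findable.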

That wasn't all. The reasoning applied to flawless final data as well, which achieved two kinds of meaningfulness. Its success gave it metaphorical meaningfulness in regard to satisfying the intended purpose. But it too had the same kind of meaningfulness as flawed final data: literal meaningfulness about the path that led to it. It was still the engineered aftereffect of a busy model built out of moving components of data and code—a model ultimately made of highly organized currents of electricity. It was a symbolic record of that model's craftsmanship. Its accurate metaphorical meaning didn't erase its concrete roots.

The next stage of broadening the understanding of models was to incorporate humans as components—exceedingly sophisticated and self-guiding components. They often introduced the starting data or reviewed the ultimate computations. On top of that, they were naturally able to handle the chaotic decisions and exceptions that would require a lot more effort to perform with brittle code. Of course the downside was that their improvisations could derail the data. Occasionally, the core of an error was a human operator's unnoticed carelessness filling in a pivotal element two steps ago. Or a human's assumptions for interpreting the data were inconsistent with the assumptions used to design the code they were operating.

In this sense, humans and code had analogous roles in the model. Each was involved in carrying out cooperative series of orderly procedures on source data and leaving discernible traces in the final data. The quality of the final data could be no better than the quality of the procedures (and the source data). A model this huge was more apt to have labels such as "business process" or "information system", abbreviated IS. Cumulatively, the procedures of the complete IS acted as elaborations, conversions, analyses, summations, etc. of the source data. Not only was the final data meaningful for inferring the procedures behind it, but the procedures in turn produced greater meaningfulness for the source data. Meanwhile, they were futilely empty, motionless, and untested without the presence of data.

Summing up, data and code/procedures were mutually meaningful throughout software development. As mystifying as computers appeared to the uninitiated, data didn't really materialize from nothing. Truth be told, if it ever did so, it would arouse well-justified suspicion about its degree of accuracy. "Where was this figure drawn from?" "Who knows, it was found lying on the doorstep one morning." Long and fruitful exposure to this generalization invited speculation of its limits. What if strict semantic linking between data and procedures weren't confined to the domain of IS concepts?

A possible counterpoint was repeating that these systems were useful but also deliberately limited and refined models of complex realities. Other domains of concepts were too dissimilar. Then...what were those unbridgeable differences, exactly? What were the majority of beneficial concepts, other than useful but also deliberately limited and refined models? What were the majority of the thoughts and actions to verify a concept, other than procedures to detect the characteristic signs of the alleged concept? What were the majority of lines of argument, other than abstract procedures ready to be rerun? What were the majority of secondary cross-checks, other than alternative procedures for obtaining equivalent data? What were the majority of serious criticisms to a concept, other than criticisms of the procedures justifying it? What were the majority of definitions, other than procedures to position and orient a concept among other known concepts?

For all that, it wasn't that rare for these other domains to contain some lofty concepts that were said to be beyond question. These were the kind whose untouchable accuracy was said to spring from a source apart from every last form of human thought and activity. Translated into the IS perspective, these demanded treatment as "constants" or "invariants": small, circular truisms in the style of "September is month 9" and "Clients have one bill per time period". In practice, some constants might need to change from time to time, but those changes weren't generated via the IS. These reliable factors/rules/regularities furnished a self-consistent base for predictable IS behavior.
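In IS code, such constants sit outside the data flow yet shape it everywhere. A minimal sketch, with the constant names and the billing function invented for illustration:

```python
# Constants: never computed or updated by the system itself,
# yet silently influencing its outcomes throughout.
BILLS_PER_PERIOD = 1               # "Clients have one bill per time period"
MONTH_NUMBERS = {"September": 9}   # "September is month 9"

def bills_due(clients, periods):
    # Every result of this procedure quietly depends on the constant above;
    # changing BILLS_PER_PERIOD would ripple through all of its outputs.
    return len(clients) * periods * BILLS_PER_PERIOD

print(bills_due(["acme", "globex", "initech"], 4))  # → 12
```

No data ever flows *into* these constants, but their consequences are detectable all over the system's outputs, which is the backward bond described next.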

Ergo, worthwhile constants never received and continually contributed. They were unaffected by data and procedures yet were extensively influential anyway. They probably had frequent, notable consequences elsewhere in the IS. Taken as a whole, those system consequences strongly hinted at the constants at work—including tacit constants never recognized by the very makers of the system. Like following trails of breadcrumbs, with enough meticulous observation, the backward bond from the system consequences to the constants could be as certain as the backward bond from data to procedures.

In other words, on the minimal condition that the constants tangibly mattered to the data and procedures of the IS, they yielded accountable expectations for the outcomes and/or the running of the IS. The principle was more profound when it was reversed: total absence of accountable expectations suggested that the correlated constant itself was either absent or at most immaterial. It had no pertinence to the system. Designers wishing to conserve time and effort would be advised to ignore it altogether. It belonged in the routine category "out of system scope". By analogy, if a concept in a domain besides IS declined the usual methods to be reasonably verified, and distinctive effects of it weren't identifiable in the course of reasonably verifying anything else, then it corresponded to neither data nor constants. Its corresponding status was out of system scope; it didn't merit the cost of tracking or integrating it.

As already stated, the analogy was neither undeniable nor unique. It didn't compel anyone with IS expertise to reapply it to miscellaneous domains, and expertise in numerous fields could lead to comparable analogies. There was a theoretical physical case for granting it wide relevance, though. If real things were made of matter (or closely interconnected to things made of matter), then real things could be sufficiently represented with sufficient quantities of the data describing that matter. If matter was sufficiently represented, including the matter around it, then the ensuing changes of the matter were describable with mathematical relationships and thereby calculable through the appropriate procedures. The domain of real things qualified as an IS...an immense IS of unmanageable depth which couldn't be fully modeled, much less duplicated, by a separate IS feasibly constructed by humans.