Tuesday, November 24, 2015

the tedium of the chase

In the ongoing project to explore exactly how those who "should" know better continue to follow faith-beliefs—including me in the first years of my life—a central yet underreported explanation is the habitual unbending momentum of belief. Along with that are actual rationales they may have for either not seeking out conflicting information or for offhandedly discounting the information's implications. One is the sanguine confidence that all unearthed discrepancies from the faith-beliefs are superficial and temporary. I know that it's sincerely professed by individuals of superb intelligence, education, inquisitiveness, etc. It's probably rampant in religious liberal arts colleges/universities.

Given that the faith-beliefs are unalterable, the presumption is that every human method of investigation will eventually bridge the gap (or disprove it). In effect, knowingly or not, directly or not, those methods are said to be chasing the faith-beliefs. This chase's secondary value is a veneer of unworried participation in numerous secular fields. Provided that any chosen field can be tenuously embraced as a path to the same old set of faith-beliefs, then careers in those fields aren't frightening temptations. Especially bold followers might further boast that the picture of a "chase toward the One Truth" unifies everything in a tidy manner. And they have a valid point, from their perspective. Why is it unreasonable to them that, sooner or later, the fullness of human knowledge would seamlessly mesh with the ideas which they promote as ultimately fundamental?

When speaking to followers of faith-beliefs, I've previously discouraged referring to a total war between science and religion/spirituality and proceeding to demand an exclusive switch of their "loyalty". I'd rather nudge the thoughtful reexamination of the nature of verification and the standards that are acceptable before following an idea. But I'm not enthused by the effort to replace the war with a chase. For critics, its basic problem is easy to notice. If the chase is indeed happening...then it's turning out pitifully dull. It's not nail-bitingly close. For whatever reason, the pursuers aren't advancing. Worse, they're falling farther behind. With each passing decade, the overall trend is of increasing intervals. It's almost as if the pursuers are headed in the wrong direction.

I'm alluding primarily to examples of the empirical sciences not converging to a singular system of faith-beliefs, although needless to say the humanities aren't either. I realize not all faith-beliefs are identical, but very few of the top ones seem to have proposed an extremely old (and gargantuan) universe, containing an off-center, relatively younger yet very old planet Earth, where species that are approximately human have lived for a minute fraction of that time range. Very few seem to have proposed the common genetic ancestry among organisms from large to small, observable in embarrassingly similar genetic code. Very few seem to have proposed the continual failure of ubiquitous modern-day recording devices to ever capture unambiguous supernatural occurrences. Very few seem to have proposed that the idea of a soul is unnecessary to the study of the basis of human behavior and consciousness. Very few seem to have proposed the germ theory of disease (i.e. not an evil spirit theory of disease).

The attempts to disqualify these examples are less than satisfying, too. If these are disqualified because the topics aren't patently religious, then there isn't a chase at all: without even the faintest of overlaps in topics, the sciences can't be chasing the faith-beliefs in the first place. If these are disqualified because of how much time has passed since the introduction of the faith-beliefs, then the chase has stopped already and the chase doesn't help to address present issues. If these are disqualified because the original communicators of the faith-beliefs weren't privy to modern terminology and concepts, then that would mean measuring progress in the chase will always be vulnerable to inconclusive decoding of the intended future meanings of their culturally limited past communications. If these are disqualified because the chase is only expected to yield nonspecific subjective benefits such as greater inspirational appreciation of the "creator's mind", then the chase itself adds less value; it doesn't promise an end to concrete discrepancies. If these are disqualified because every discovery that clashes with the faith-beliefs is immediately assumed to be a mistake in one way or another, then the chase is a worthless pretense; the compliant remnant amounts to a mere shoe print of the faith-beliefs, not a discrete confirmation.

More likely than not, followers could invent more justifications to keep the hope of the chase workable, like conspiracy theorists who can swiftly digest incompatible data and arguments. Just as I can't utterly prove nonexistence from absence, I can't utterly establish that all varieties of human inquiry will never vindicate someone's faith-beliefs. I can state that I don't have nearly enough faith to suppose it will happen—my own adaptation of the half-baked quip, "I don't have enough faith to be an atheist". However, the suggestive trend is that the whole pack of pursuers is giving a horrible performance...and the performance is strikingly coordinated. A dawdling minority doesn't wreck the chase. But when a majority is teaming up to dawdle, the chase becomes tiresomely drawn-out. Evidently the group is more than lagging behind, it's lagging behind in lockstep. The increasing distance from the "target" isn't accompanied by increasing distance among the pack. In this chase, to be remote from this target isn't to be an unusual outlier; to be the outlier is to be the target.

At a high level, I'm pleased by the admission of the chasm that separates the content of beliefs that require faith and the content of beliefs that don't. I'm less pleased by the nominated remedy that the chasm will shut by itself someday (so in the meantime the chasm signifies nothing).

Thursday, November 19, 2015

It's a Good Life...of belief in objects by "choice"

A monster had arrived in the village. Just by using his mind, he took away the automobiles, the electricity, the machines - because they displeased him - and he moved an entire community back into the dark ages - just by using his mind. [...] Oh yes, I did forget something, didn't I? I forgot to introduce you to the monster. This is the monster. His name is Anthony Fremont. He's six years old, with a cute little-boy face and blue, guileless eyes. But when those eyes look at you, you'd better start thinking happy thoughts, because the mind behind them is absolutely in charge.  --"It's a Good Life", The Twilight Zone
The Twilight Zone had a memorable episode in which a character could remake everything around him with a thought. Unfortunately, as indicated by the quote above, this character was six. The results were...disturbing. This staggering ability has appeared in a large number of stories across media; TV Tropes maintains a long list of comparable examples under the description "Reality Warper".

Looking back now, I've noticed that it has a subtle application to the religious views that I eventually discarded. It was connected to a pair of precepts. The first was repeated often and dramatically: choice of belief had severe moral stakes. Choosing to believe the correct ideas was an essential duty. Guiding everyone to do the same was an official mission of mercy, because the afterlife of all who had chosen beliefs well would be infinitely preferable to the afterlife of those who chose poorly. For belief to warrant that degree of judgment, it had to consist of self-aware, willful choices.

The pair's second precept was that the objects described by the beliefs were categorically genuine. The beliefs weren't solely metaphorical. Unlike a daydream or a fondness for cake, the beliefs weren't all about the believer, i.e. the subject. The topics of the beliefs were objective and external. Accordingly, the beliefs' objects certainly weren't hallucinations encased in the subject's thoughts. Nor would the objects' features vary by the subject's individual perspective.  

Separately applied, these two precepts are commonplace. It's not peculiar to in effect punish someone for the despicable beliefs they've chosen to adopt—but never until after they've translated the beliefs into harmful deeds. Similarly, it's not peculiar for the content of beliefs to be about objects beyond the believer. But in combination, the pair formed the frankly bizarre prescription of assigning moral blame based on beliefs about objects which the subject didn't control. The subtext was that regardless of the objects' independent existence and effects, believing in the objects was somehow a grave, voluntary decision which was the responsibility of the subject.

Normally, from an evenhanded subject's viewpoint, statements about physical objects have unequal "believability". That authentic believability is built on the successes or failures of ordinary methods: observations, deductions, tight inferences, calculations. But to fairly hold their pure willpower accountable for those objects' believability presupposes that their willpower itself must be capable of increasing object believability. When someone is in a forthright state of unbelief about the objects, faulting them for flaws in their related logical reasoning is more pertinent than just faulting them for not thinking or acting more as if the objects are believable. If, on its own, the subject's striving to envision and feel with greater intensity is honestly expected to significantly boost the objects' believability, then the subject possesses amazing psychokinetic powers. Essentially, if it's appropriate to chastise them for not trying hard enough to metaphysically readjust the level of detection, then their mistake must be that they aren't properly industrious Reality Warpers.

Despite how strange this sentiment sounds, a disguised form of it slowed my progress out of my former views. It wasn't verbalized, but it was present. For approximately four years, I would've contended that my ominous doubts about the plausibility of some of the core parts of my religious view didn't weaken my preexisting choice to keep believing in other core parts. I was in the group of split-minded followers that's frequently overlooked. I was perpetually restless, because I was full of chronic doubts and clinging to the original commitment anyway. I'd been taught the virtuousness of deliberately insisting on the accuracy of a set of supernatural "objective facts" come what may. In the midst of that courageous endeavor, missing or highly questionable proofs weren't excuses but rewarding challenges: it was more admirable to adhere to these mysterious facts without whining for corroboration. If their advice were paraphrased to be less self-flattering then it would declare, "It doesn't matter if the expected indications of these facts don't show up in experience. What matters is if you nonetheless sternly command these notions to be really most sincerely factual."

I confess that this characterization is sarcastic. I'm expressing my current amusement when I hear the glib recommendation to respond to lackluster substantiation by doggedly "believing more" in an object's realism. At the time I didn't picture myself psychically fortifying the objective believability of supernatural pronouncements. Rather I embodied it in the structure of my past viewpoint. There were sorted and sealed layers, each one more prone to shifting and unreliability than the one below it. The consciously chosen remaining core parts of my religious beliefs were in the bottom layer. My thoughts, steady but changeable, were in the next layer up. On top was the layer of the ordinary methods listed earlier.

I was ordered to align my middle layer of thoughts with the bedrock layer of supernatural deep Truth, not with the deceptive, shallow, surface layer of material events. If my thoughts were feeling shaky, then my obligation was to forcibly re-anchor my thoughts in immovable doctrines. I was squashing my wavering estimations against beliefs that I treated as more "objective" than fallible objects. I felt that I was diligently reemphasizing, not wholly generating, the shadowy supernatural objects.

Before information could possibly topple this stack, I needed a philosophical refinement of my inconsistent definitions of belief. I needed to quit settling for a stubborn belief by choice. A handful of piercing questions triggered the avalanche. Why did the supernatural domain deserve permanent residence in an unshakable bottom layer? More to the point, why were there distracting layers of protection? Why inject exceptional complexity into assessing the accuracy of candidate supernatural objects? Why was belief in those objects recast as a grueling, praiseworthy, premeditated selection...instead of the spontaneous and undeniable aftereffect of showing/explaining persuasive objective support? Why were we followers instructed to begin with belief and only later scrunch information into the belief's shape as needed, which was the exact reverse of the conventional procedure? Why didn't everyone who investigated my views end up in total agreement when they started from scratch?

Imagine if you will an absurd analogy from one of the many domains besides the supernatural. Generally, if someone wants to convince a companion that snow is falling outdoors, they urge looking through a window, opening the door a crack to peek out, checking which month it is, etc. They don't say, in an echo of the paraphrase from earlier, "Forget using your eyes or reasoning to evaluate whether the snow falling outdoors happens to be believable to you. No, your mandate is to concentrate intently on a sudden snowfall. Believe me, that shall suffice."

Monday, November 09, 2015

veer not

Last time, I asserted that the high values I placed on thinking didn't shift during my progression from religious to atheistic views. On either side of that boundary, I highly valued the serious pondering of beliefs and presuppositions before acting on them. I highly valued that convincingly separating reality from unreality requires more work than hazy intuitions and fleeting goosebumps. I highly valued that credible beliefs should be backed by coherent explanations whenever someone asks, though for religious statements the typical explanations were meticulously chosen excerpts of sacred texts.

Moreover, I highly valued my rejections of rival positions. I rejected that a statement and its logical opposite could be true at once—unless the statements have differing, limited scopes, which would also imply the two aren't in fact opposites. I rejected that everyone is equally qualified to offer opinions on all topics or that they may be in conflict yet all be "right". I rejected that (hypothetical) supernatural stuff, unlike everything else, could have a bizarre or fluctuating status between real and unreal, just because the corresponding statements were so vague and varied.

Most vitally, according to what I was taught, I continued valuing the worth of faith-beliefs in proportion to the amount of accuracy; our ideas mattered because our ideas were definitive. But with broader experience I've recognized the obvious point that not all those who identify with faith-beliefs necessarily "believe" in that sense. Their ancestral legends/traditions can have openly acknowledged inaccuracy and nevertheless supply substantial emotional/societal rewards. They may opt to fruitfully reinterpret the original symbolisms for their contemporary tastes and needs.

However, by asserting that I supposedly held all these values from the start, I'm inviting a frank retort: why weren't my views forced to change sooner? In general, how does someone with these values remain committed to faith-beliefs? One complex answer is sociological: the ability of a group to have a mutually-defining, mutually-strengthening relationship with a set of ideas. A second complex answer is psychological: the strategy of splitting the self into a smoothly organized team composed of the part that needs to sincerely believe and the part that continually contrives ways to satisfy and preserve that need. A third complex answer is philosophical: the doubt that undirected evolution would produce a trustworthy brain. And the answers go on and on from there. All this agrees with the common principle that greater intelligence is often used to invent dazzling ways of being comfortably wrong, i.e. denials and rationalizations.

Complex answers are intriguing to catalog and analyze. Yet the whole collection is a distraction from the foremost factor that kept my views, and countless others', from shifting. It's less emphasized in debates because it's transparent, rudimentary, and indefensible: inertia. I don't refer to "inertia" as a metaphor for the absence of motion but for the unforced tendency or habit of never deviating from a predetermined path—nor contemplating the possibility of it. As I'm referring to it, inertia doesn't even represent the active effort to counteract disruptions. It's the momentum of not observing any disruptions in the first place. It's following a faith-belief today due to following it yesterday, not due to compelling reasons or superiority over fairly compared alternatives.

Inertia is an exceedingly gentle form of deprivation. It never indicates the restrictions that it's imposing by default. It doesn't hint at the noteworthy information that isn't being sought. It doesn't disturb long-term confidence and contentment. It doesn't force confrontations with any opponent or opposing viewpoint. It doesn't break expectations or promises. It doesn't ask challenging questions.

It may be indirectly encouraged in followers through more pleasing ideals: faithfulness, persistence, single-mindedness, devotion. It may be reinforced through the monotonous lifestyle connected to, and constructed around, the faith-beliefs. By comprehensively complying in every aspect of their behavior, the follower reduces if not eliminates the odds of colliding with surprise contradictions to their settled faith-beliefs. They talk to the same individuals, go to the same destinations, expend their leisure time with the same activities, consume the same categories of media...

...including reading the same internet sites and books. Thus the success or failure of atheistic sites/books to reach such followers is inseparable from inertia's level of influence. The followers who have enough interest to read these resources are the subset who aren't as bound by inertia. Opening the site/book is a sign that they're the sort with questing, lively styles of thought: they might not be persuaded, but mere inertia doesn't isolate them from unfamiliar standpoints. The flipside is that the tougher followers to reach are also the subset who didn't think of trying these resources, or feel any motivation to. It's akin to a visitor who chooses to attend a religious service: they might not be impressed to come back, but they're evidently willing to hear the service, unlike everyone who didn't visit.

The stubborn obstacle of inertia is one of many considerations in the perennial clashes over the "best" tone for these resources. Some endorse an antagonistic and accusatory tone toward all manifestations of faith-beliefs, while some endorse a conciliatory and sympathetic tone. Unfortunately, neither is a universal antidote to followers' inertia. The first has the advantage of achieving recognition through controversy, as well as provoking bewildered followers to wonder "What problems could they possibly see in my views that could cause that reaction?" In contrast, the second has the advantage of appearing less daunting, thereby attracting followers who wish to gingerly explore their doubts without the threat of feeling attacked. In other words, the first can potentially shake and penetrate inertia, and the second can potentially dodge and placate it. A range of strategies might succeed in getting past followers' indifference to mulling over the weaknesses of their views.

I've previously noted that the second was better suited to my ambivalent shift in views. My own inertia wasn't radically deflected at all. Like a long ship, I wouldn't see that my route veered. The curve out of the shadow was steady and slow. I didn't seek atheistic resources until after I had stumbled onto contradictions that were meaningful to me. I didn't feel an urge to painfully dissect my personal convictions. Without intending to undermine my faith-beliefs, I learned, thought, and lived. Then, I kept reevaluating which ideas still could fit, on account of my aforementioned intellectual values. The progression wasn't always calculated or methodical. It was prolonged and tentative. I didn't want to step off a ledge; I preferred to, uh, rappel if I could. If I'd applied my values into more vigorous research and more fearless self-interrogation, I would've dislodged my old views more quickly. Inertia didn't stop me from turning away from my faith-beliefs, but it propelled me into a longer arc.

Sunday, November 01, 2015

the trials of scopes

To the uninitiated, the claim to have superior reverence for Reason may sound as presumptuous and self-congratulatory as Libertarians claiming to have superior reverence for Liberty. Way back in 2012, when I was first getting acquainted with atheists on the internet and with organized atheistic "movements" of any kind, I balked a little at their penchant for publicly designating Reason as a distinguishing feature of their views. My shift from religious to atheistic didn't reflect that. Although the outcomes of my thoughts had greatly changed, my appreciation of logical deliberation had never changed throughout. I had always embraced the importance of closely examining ideas. I know now that my former view had an undercurrent of awful deficiencies, but blatant anti-intellectualism wasn't one. If it had been, would I have been as attentive to the counterarguments that ultimately persuaded me?

Later, I realized my misunderstanding. In these contexts, "Reason"—not to mention "Rationalist"—represented an assorted category of commendable methods to gather and test knowledge. And a major need met through introducing a distinct category, no matter its choice of name, was to specifically set up an opposite to the flawed method of faith (and the unrecommended methods of gullibility, ungrounded conjecture, and the substitution of wishes in place of facts). According to the wording they chose when they were advocating greater respect for Reason, I saw that they were in general agreement with my philosophies. In their more succinct language, like me they were stressing the need for carefully gauging ideas' amount of meaning/accuracy according to the quality of the verification of those ideas' implications. As usual, abstracted labels don't expose similarities and conflicts as effectively as unsummarized particulars. In my experience, likelier than not their views shared a lot with mine: my absence of belief in gods, and the broad outlines of the best ways to handle ideas, and the stance that the ideas of materialistic naturalism are the most probable fit for all the actual findings that have passed adequate standards.

Nevertheless, crisply invoking plain Reason as an explanation and/or justification has a small risk. It might entice followers of faith-beliefs into once again attempting to gloss over or rationalize the key differences in their beliefs' premises. They may suggest that materialistic naturalism is also a faith (requires the method of faith) or suggest that its notion of authority is equivalent. To that end, they're pleased to detect the faintest whiff of immaterial stuff in materialistic naturalism. It offers the immediate opportunity to pronounce that this view is relatively incomplete, hypocritical, illogical, etc. Then they may swiftly declare that anyone who holds to this view is forced to concede that Reason is their own version of an immaterial foundation; like them, their view is "really" dependent on the existence of something ghostly. At the same time, they certainly won't mention the meagerness of this concession. Even to concede the existence of a spiritual being entirely synonymous with Reason, such as a Cosmic Legislator behind the "laws" of nature, would still be foreign to the anthropomorphic content of the commoner faith-beliefs. It would be a relative of Deism's aloof god.

Fortunately, I don't define Reason in those disconnected phantasmal terms, and as stated earlier, I'm sure that I'm not alone in this. My longstanding position is that Reason's existence is embodied in countless concrete acts. These acts include work carried out by human brains, despite the apparent passivity of some acts like direct perception. (Actually transforming the sense organs' nerve impulses into usable information is a convoluted process that incorporates expectations and judgments.) Except metaphorically, I don't see Reason as a guide, an oracle, or an overlord. Reason is an ideal to measure against each act, one by one. It's largely about honoring consistency. Acts that are in accordance with Reason, such as cautious deductions and generalizations, are beneficial due to the reliable consistencies of encountered realities. Reasoners can confidently let causality be causality because the reasoned links between causes and effects echo these reliable consistencies—though the links aren't easily disentangled.

This prosaic depiction of Reason has an inherent cost. Since these acts occur in complex, unique, concrete contexts, the acts' results aren't necessarily applicable to other contexts. The importance of context shouldn't be underestimated. The analogous limitation in software development is known as the scope of an entity. For instance, the entity could be a single running total. By necessity, software code is broken up into manageable sections. To keep the meanings of entities predictable at all times, each entity is associated with a set scope of one or more code sections. It can be simply referenced, and possibly manipulated, only by the code sections in its scope. Of course, the section in which it is first introduced is its primary or native scope. Before an entity is manipulated in a separate scope, it must be explicitly exported there, or duplicated to a new entity there, or mixed into one or more of the preexisting entities there, or retrieved from across scopes through a completely descriptive longer version of its name (i.e. more like a full mailing address for the entity).
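The running-total instance can be sketched in a few lines of Python (the names here are purely illustrative, not from any particular program):

```python
# A sketch of lexical scope: 'total' is introduced inside tally(), so
# that function body is its native scope. To use the value elsewhere,
# it must be explicitly exported -- here, by returning it.

def tally(values):
    total = 0                 # a single running total
    for v in values:
        total += v
    return total              # exporting the entity to the caller

result = tally([1, 2, 3])     # the exported value, a new entity here
print(result)                 # 6
# Referencing 'total' directly at this point would raise a NameError:
# an entity can only be referenced by the code sections in its scope.
```

Attempting to read `total` outside `tally` fails precisely because its meaning is only predictable within the section that introduced it.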

The relevant similarity is that just as the scope of each software entity doesn't automatically extend outside of the code section it came from, clear-sighted appraisal of an act of Reason recognizes that its correctness has a scope that doesn't automatically extend outside of the context that led to it. Additional contexts might be in the scope. But neither acts nor contexts are all alike, so there can't be generic rules for determining scopes. Indeed, the boundaries of the acts' scopes might be unknown without experimentation. Changes in heat have smoothly changing effects...until a phase transition happens. The phase transition violates acts of Reason performed on either side of it. Neither does pure mathematics escape scopes. The interior angles of a triangle have a sum of 180°...but that could be false for non-Euclidean geometries. The result of multiplication doesn't hinge on the order of the factors...but that could be false for matrices. Numerous realities remain appallingly unsympathetic to human longings for smooth simplicity.
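The matrix case is easy to exhibit concretely. Here's a minimal sketch, with the multiplication of two 2×2 matrices written out by hand rather than borrowed from a library:

```python
# Commutativity of multiplication has a scope: it holds for ordinary
# numbers but fails for matrices, as two small 2x2 examples show.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # swaps columns when applied on the right

print(matmul(A, B))    # [[2, 1], [4, 3]]
print(matmul(B, A))    # [[3, 4], [1, 2]]  -- a different result
```

Swapping the order of the factors produces a different product, so a rule with an unexamined scope quietly breaks the moment it leaves the context that established it.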

That said, the relentless accumulation of human acts of Reason has established, and repeatedly confirmed, an array of conclusions with undeniably wide scopes: elements, particles, fundamental forces, energy conversions. These conclusions weren't spoken by a singular thing, entitled "Reason", which humans idolized and begged for answers. Acts were committed, in a lurching sequence, until these conclusions' wide scopes were surveyed.

The question of scope is complicated, but it's not inconsequential hair-splitting. It's exactly why commonplace anecdotes—testimonies—have low validity. On its own, the highly specific context of an anecdote shouldn't be accepted as a decisive source for acts of "Reason" with prematurely extensive scopes. The reasoner should at minimum have plausible arguments for assuming that to be the case. And perhaps they should postpone that assumption altogether until after more acts of mental study and physical investigation.

Lastly, if Reason is a collection of acts, with shrewdly scrutinized scopes, performed by humans, then it functions as more than a category or an ideal. It's a spreading cultural custom of confronting ideas' accuracy in credible ways. I absorbed this to some degree in my conventional upbringing in contemporary America, but the cultural customs of my faith-belief started out more dominant nonetheless. At first, these two groups of customs persisted in a sometimes restless truce that divided up the territory of my beliefs. The start of the end was when I permitted myself to candidly compare the substance of the two. I didn't exchange my loyalties for a new metaphysical master. There wasn't a shining light of irresistible Reason compelling me to abruptly embrace "self-evidently right" opinions on all issues. All that happened was the crucial decision to more fully pursue the customs of the culture with the stronger corroborations, regardless of where the other disagreed on the details uncovered. It wasn't my fault that a thorough commitment to the one, combined with increasing quantities of information, ended up eroding the whole structure of the other...

Monday, September 07, 2015

thank you for not smoking

The previous concept reapplied from software was the black box analysis technique. The technique metaphorically places something inside a black box, which signifies avoidance of direct scrutiny or even identification. The something's effects are examined instead, thereby circumventing the interference or the labor of knowing the something and its inner workings. The analysis proceeds through the factual details of various interactions between the something and its environment.

It's highly relevant to the goal of objective testing, because it avoids prejudices. The act of inspection is entangled in the inspector's slanted perspective, while black box tests compare clear-cut outcomes to uninfluenced expectations. If external outcomes don't satisfy neutral and sensible criteria then the something should be reevaluated, regardless of who/what it is and the characteristics it supposedly has within.
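As a small sketch of the idea in Python (the function names are hypothetical, invented for illustration): the checker below judges a sorting routine purely by whether its observable outputs meet neutral criteria, never by reading its internals.

```python
from collections import Counter

def mystery_sort(xs):
    # Stand-in for the something inside the black box; its inner
    # workings are deliberately irrelevant to the check below.
    return sorted(xs)

def black_box_check(fn, sample):
    """Judge fn only by its outputs: ordered, and nothing lost or invented."""
    out = fn(sample)
    in_order = all(a <= b for a, b in zip(out, out[1:]))
    same_items = Counter(out) == Counter(sample)
    return in_order and same_items

print(black_box_check(mystery_sort, [3, 1, 2]))   # True
```

The criteria are stated before the box is opened, so the verdict can't be bent by the inspector's opinion of who or what is inside.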

Beyond black boxes, the topic of testing software includes another broadly useful concept: smoke tests. These are rapid, shallow, preliminary, unmistakable checks that the software is minimally serviceable. The name comes from the analogy of activating electronic equipment and just seeing if it smokes. A smoke test of software runs the smallest tasks. Can it start? Can it locate the software that it teams up with? Can it load its configuration? Can it produce meaningful output at all?
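Translated into code, a smoke test might look like this minimal sketch (the functions are hypothetical stand-ins, assumed for illustration):

```python
# A smoke-test sketch: shallow, fast checks that the software is
# minimally serviceable -- nothing deeper. Names are illustrative.

def load_config():
    return {"output": "enabled"}          # stand-in for real configuration

def produce_output(config):
    return "ok" if config.get("output") == "enabled" else ""

def smoke_test():
    config = load_config()                # Can it load its configuration?
    assert config, "config failed to load"
    out = produce_output(config)          # Can it produce any output at all?
    assert out, "no output produced"
    return True

print(smoke_test())                       # True
```

Passing says very little; failing says a lot, and says it quickly.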

No specialized expertise is necessary to notice that smoke tests are vital but also laughably inadequate. Since the software must pass much more rigorous tests, it's logical to question why smoke tests are worthwhile to perform more than once on the same software. However, the bare fact is that software seldom stays the same, especially in the middle of furious development. Thus the worth of smoke tests is more for quickly determining when a recent modification is gravely problematic. A malfunctioning smoke test implies the need to reconsider the recent modification and rectify it—probably very soon in order to prevent delays in the overall schedule.

The surprise is that smoke tests resemble a mental tactic that shows up in various informal philosophizing. Like software developers who screen their attempts with smoke tests and then promptly fix and retry in the event of failure, a follower of a belief may repeatedly rethink its specifics until it's acceptable according to the equivalent of a smoke test. In essence the follower has a prior commitment to a conclusion, which they purposely reshape so that it at least doesn't "smoke". This tactic greatly differs from carefully proposing a tentative claim after collecting well-founded corroboration. And it differs from the foundation of productive debate: the precondition that the debaters' arguments are like orderly chains from one step to the next, not like lumps of clay that continually transform to evade objections.

As might be expected, the smoke test tactic easily leads to persistent misunderstandings about aims. The unambitious aim of the tactic is a pruned belief that isn't flagrantly off-base, not a pristine belief that's most likely to be accurate. A few belief smoke tests are absurdity, contradiction with solidly established information, violation of common contemporary ethics, and so forth. (The changes might qualify as retcons.) Until they show the candor to concede that their aim is a treasured belief that isn't transparently wrong, rather than a novel belief that's plausibly right, they're mired in a loop of mending belief by trial and error.

They may justify the tactic by saying, "Of course I can't profess the most uncomplicated, unswerving variant of my belief. I know that variant can't be correct. It would be too [absurd, barbaric, intolerant, naive, infeasible, bizarre, self-contradictory]. I use my best understanding to strengthen the weak points that ring false. Doesn't everyone? Why's that a reason for criticism?"

This rationale is persuasive; to revise beliefs over time is no shortcoming. The telling difference is that everyone else isn't using the tactic on beliefs portrayed as complete, authoritative, correct, and self-supporting. It presents two issues in that case. First, why would the belief have been communicated in such a way that the recipients need to make fine-grained clarifications for the sake of succeeding at smoke tests—which are exceedingly basic, after all? Second, once someone has begun increasingly reworking the original belief to comply with their sense of reasonableness, when does the belief itself stop being a recognizable, beneficial contributor to the result? Is it not a bad sign when something requires numerous manual interventions, replacement of parts, and gentle handling, or else it swiftly proceeds to belch embarrassing smoke?

Sunday, August 02, 2015

black boxes blocking baseless bias

Considering the proportion of time filled by a full-time career, its thought patterns carve deep grooves. Hence the blog winds up with entries musing on the wider application of software patterns like, say, competing structures. In a software project, diverse structures of data and code could all be part of doable solutions. But the project allows only one solution. Not all of these structures have equal quality, so a competition is appropriate. Meanwhile in the philosophical domain and elsewhere, humans contrive diverse mental structures for the "project" of thinking and acting within their puzzling realities. And it shouldn't be verboten for these structures of uneven quality to fairly compete.

That's the toughest obstacle in practice: defining and applying legitimately equitable standards of comparison. Whenever evaluators have decided beforehand that the structures they endorse will be superior, their tendency is to choose and distort the standards to assure it. The ones committed to candor readily admit this; better still, if they're confident then they welcome independent reviews that can validate the credibility of their own evaluations.

Luckily, the ceaseless struggle to approach ideas with less partisanship has another pattern back in the technological domain. The common black box technique refers to analyzing something purely via the stuff entering and exiting it. Knowledge about the thing, and its contents, is excluded for whatever reason. Conceptually, the thing is hidden inside a black box with little holes for stuff to pass through. On a diagram, multiple arrows go to and from the box, but nothing is written in the box except its name. As a side note, a representation of a single, huge, unexamined thing containing miscellaneous parts, such as an external computer network, might have been drawn as a bumpy cloud to emphasize its vague "shape" and "size".

The black box analysis is simplified and undeniably easier to manage as a consequence. Sometimes, depending on the task, the thing's innards are mostly off-topic. To smoothly interact with the thing, the more crucial details are what agreed-upon stuff will come out (or occur) after agreed-upon stuff goes in. Without condensed black box abstractions, the modern industrial age of specialized, interchangeable technology would be infeasible. Everyone would need to know an excessive amount about the individually complex pieces merely to construct a functioning whole. This is an equally essential ingredient of software. With published protocols and data formats, software can handle other software as black box peers which accept and emit lucid messages. Broad classes of compliant software can profitably cooperate.
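A tiny illustration of the peer idea, assuming a made-up JSON request/reply format: the caller relies only on the agreed-upon shape of the messages, never on the peer's innards.

```python
import json

# The "peer" could be any compliant implementation. To its caller it's
# a black box that accepts and emits lucid JSON messages.
def peer(request_text):
    request = json.loads(request_text)
    if request.get("op") == "sum":
        return json.dumps({"ok": True, "result": sum(request["args"])})
    return json.dumps({"ok": False, "error": "unknown op"})

# The caller knows the published format and nothing else.
reply = json.loads(peer(json.dumps({"op": "sum", "args": [2, 3, 4]})))
assert reply == {"ok": True, "result": 9}
print(reply["result"])  # 9
```

Swap in a different compliant peer and the caller neither knows nor cares: that indifference is precisely what makes large assemblies of specialized parts workable.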

Overall, an extensive black box description is invaluable for something that's largely unknown—by design or by circumstance. In contrast, the value of a black box description for something that's largely known is less intuitive. It hinges on the recognition that too-close familiarity with something might build a deceptive or incomplete impression of its satisfactoriness. When the something is software, it's only logical that its industrious writer is unaware of its oversights, else they wouldn't have written the oversights. At writing time, they may have framed their solution too narrowly to enclose the project's range of subtleties. Later, their ongoing frame goes on to prevent them from imagining tests capable of exposing the cramped, inadequate boundaries.

Mistakes of oversight are rivaled by occasionally embarrassing mistakes of "transcription": the writer failed to faithfully encode their original intent. They wanted to read memory location Q but they wrote code that reads J. Once again, it's only logical that such mistakes wouldn't survive if the writer's own firsthand experience caught every gaffe they introduced. They may have been distracted. Depressingly often, disorganization gradually accumulates in the code segment. Or, in a less forgivable offense, it's confusingly expressed from the outset. As a result, although they're staring directly at a mistake, they overlook it under the onerous strain of deciphering and tracking the bigger picture.

Less specifically, the sizable value of black box analysis for a largely known something lies in cross-checking the fallible judgments of "insiders" about that something. Placing it in a black box counteracts the hypothetical shortcomings of the insiders' entanglement. It includes putting aside comprehensive information about something's unique identity and full set of characteristics, and putting aside other connections/relationships, and putting aside appeal/repulsiveness. It's the candid, untainted estimation of whether the something's observed "footprints" match levelheaded expectations in pertinent contexts. The writer's admirable pride of craftsmanship doesn't attest that the supposedly finished software unit operates acceptably in all probable cases.

This practice's basic features are visible throughout innumerable domains, though with varied titles. It chimes with the "blinding" of subjects in experiments and surveys of customer tastes. (In an unusually palpable manifestation of the metaphor, part of the blinding procedure might employ nondescript, opaque boxes.) Blinding forces them to assess the sample with the sole attribute they can sense. From their view, the sample's source is in a black box. A second example is the services of an editor. They can approve and/or modify sections of a draft document according to the unprepared reactions it elicits in them. Unlike the submitter, they aren't an "expert" at knowing what it's meant to convey. They don't feel the submitter's strong sentimental attachments. They have a greater chance of encountering the draft itself. Where the editor is concerned, the labor behind it doesn't affect their revisions. The draft came out of a black box.

A third example is in the same theme, albeit more cerebral. It's the strategy of, after a long session of work on a preliminary creation, reserving time away before revisiting it. In the heart of the session, the creation is summarizing a portion of the creator's stream of consciousness. Therefore the contemporaneous brain activity grants them the perfect ability to effortlessly compensate for the creation's ambiguities and awkward aspects. To them in that instant, the creation's "seamless" substance and beauty are impossible to miss. When they return, their brain's state isn't enmeshed with the creation. They take a fresh look at its pluses and minuses in isolation. This is akin to the advice of not transforming a late-night brainstorm into irreversible actions until pondering it next morning. Interestingly, the something in the black box is the past configuration of the brain currently reexperiencing that past configuration's product, i.e. the creation/brainstorm. The critical difference is that the product isn't rubber-stamped due to where it came from—whose brain it rippled out of. No, the caliber of the product discloses the worthiness of whatever produced it, in this instance a past brain configuration. (It might be uncomplimentary. "My brain was really mesmerized by that tangent, but this is unusable nonsense.")

Despite its encouragingly widespread and timeless scope, black box-style thinking is a supplemental tool with inherent limits. It's for temporarily redirecting attention to the external symptoms of something's presence. Its visual counterpart is a sketch of a silhouette. It doesn't capture something's essence. It's not an explanation; on its own, a lengthy historical listing doesn't reliably predict responses to novel situations.

The epitome of an area dominated by these caveats is human conduct. Without question, the brain's convoluted character precludes painless black box analysis for rigorously unraveling how it runs. It exhibits context-dependent overrides of overrides of overrides...or it might not. Trends discovered during good tempers may have little relation to bad tempers. Or mannerisms connected to one social group may have little relation to mannerisms connected to contrasting social groups. Or a stranger switches among several conscious (or unconscious) guises, aimed at selectively steering the verdicts of unacquainted onlookers. The stranger is in a black box to the onlookers. The guises are collections of faked signals chosen to misinform the onlookers' analysis of that black box.

Caveats notwithstanding, entire societies heavily regulate members through black boxes of human conduct. (As a popular song from the early nineties famously didn't proclaim, "Here we are now: in containers.") Members are efficiently pigeonholed by unsophisticated facts about their deeds. In the society, facts of that type serve as decisive announcements of the member's inner nature. So, members who wish to be seen a certain way are obliged to adhere to the linked mandates. No extra particulars about them are accounted for. For this purpose, they're in a black box. It appears callous at first glance, but it exemplifies the earlier statements about the value of shortcuts for working with something that's largely unknown. When societies reach massive scales, it's impossible for members to obtain penetrating awareness of every other member. Like before, black box understandings ease interactions with scarce information about either party, because the pair can foresee what will transpire between them.

Furthermore, black box analysis of human conduct shares the advantages stated earlier for inspecting something that's largely known. The effectiveness is lowered by the caveats of this area but not eliminated altogether. It's more than adequate for imposing sharp, sensible thresholds on other findings. "If I didn't know them as well as I do, and they acted the same as they have in the situations I know of, would there be a disparity in how I esteem them? If there is, do I have a well-founded excuse for it? At some point, my firmest convictions about who they are should be aligning to some degree with their acts...."

Sunday, July 19, 2015

sit a spell in the Silver Chair

And this time it didn't come into her head that she [Jill] was being enchanted, for now the magic was in its full strength; and of course, the more enchanted you get, the more certain you feel that you are not enchanted at all.
My impression is that The Silver Chair is typically placed in the lower tier of the Chronicles of Narnia series. Part of the reason may be the unsettling villainess, namely the witch in green. Her alluring appearance and mostly-cordial composure are only masks. Like her realm of Underland, greedy malevolence lies under the mild surface. She's a patient schemer whose impulse is to work in secret.

Truth be told, she plainly doesn't need clumsy, aggressive threats of force to achieve domination. Her well-suited style of bewitching magic is psychologically manipulative and overpowering. Why would she wastefully assault her enemies' bodies when she can either mislead them or infiltrate their souls, thereby seducing them to defeat themselves? To seize Narnia's Prince Rilian, she doesn't overwhelm him with a contingent of fighters. She gradually fascinates him. She captivates him to make him a captive.

Similarly, his ensuing entrapment in Underland doesn't involve violence or intimidation. His magically mediated self-betrayal progresses to a worse stage due to regularly scheduled sessions bound in the witch's potent Silver Chair. In essence he's no longer himself. He's shifted into a complete second persona. His former memories, motivations, and disposition are displaced. Throughout the day, Rilian's mentality is confined so masterfully that he's mostly oblivious of the difference. The author reiterates this obliviousness a few times to ensure that readers note this characteristic of the witch's powers. It may be an allusion to the author's recurring theme of moral desensitization: frequently committing wrongdoing, or just fantasizing about it, can cloud perception of its wrongness.

So much for the author's intentions. In fact, Rilian's second life under the sway of the witch and Chair is a striking multifaceted illustration of living under the sway of religious inculcation.
  • He's gushingly committed and grateful to the witch, i.e. the designated authority over him. He trusts the authority wholeheartedly. He believes earnestly in the statements made by the authority, even though he can't explain exactly how the authority got that knowledge. The authority has extraordinary abilities ("magic arts") that he can't possibly duplicate or evaluate for himself, so he doesn't feel able to question.
  • He has been handed a tidy script of expectations to fulfill. His destiny is to capture Narnia for Underland. His free will—such as it is—revolves around his willingness to conform, not the individual freedom to chart and assess his course. He's been told what role he will play and how. Equally clear is his present and future hierarchical position; commands will flow from the witch to him and from him to his inferiors. On some level his freedom persists, but external forces have tampered with it.
  • The witch takes him on short trips to preserve his acclimation to the surface, e.g. the intensity of sunlight. Being a prisoner, his trips entail severe conditions. He's forbidden from displaying his face or speaking. These preventative countermeasures embody an attitude of minimizing and filtering unavoidable contact with the frightening outside world. This same attitude is demanded of religious followers. They're cautioned from weakening their faith by paying attention to unvetted sources of information or by studying alternative viewpoints. It's worth noting that this worry isn't far-fetched. I've previously mentioned that my uninteresting background didn't feature ruthless cultural isolation, and indeed my steady absorption of contradictory information was key to ultimately discarding my parents' faith-beliefs.
  • Like the rigidity of followers' attitudes toward outside ideas, the rigidity of their habitual rituals is shrewd. These repeatedly reinvigorate their faith-beliefs, just as Rilian's periods in the Silver Chair reinvigorate the twisted roots of the thought patterns imposed on him. These prescribed doses of "spiritual relief" are indispensable to reinforce the desirability of their specific concepts. The opposite tendency is unrelenting, because observable violations of faith-beliefs inevitably accumulate. It isn't rare to hear devoted attendees comment that their weekly activities renew their faith or to hear them warn that erratic attendance would endanger their faith.
  • That said, the comparison isn't perfect. Most obviously, Rilian must be tied to the Chair. In each sitting, his preexisting self and his central memories temporarily surface. Some of the book's most moving lines of dialogue are his desperate pleas to be released before he loses himself again. Followers of faith-beliefs, some more than others, have comparable episodes of clarity and candor. They may not be nearly as horrified as Rilian about how they've spent their time nor nearly as eager to drop their comforting beliefs. But perhaps they're haunted by the meddlesome pair of questions, "What if my faith-beliefs have been inaccurate all along?" and "Precisely what indicates that my faith-beliefs probably aren't inaccurate?" Those are the occasions when they're more willing to pay sincere attention to the arguments against them, and they're temporarily less inclined to brush away the holes in their own arguments. Debates don't need to convince them immediately; hearing the faults expressed is preparation for a hypothetical future hour, in which they'll abruptly stand up, look around, and deliberate about the soundness of their thinking without the Silver Chair's interference.
  • Lastly, he's courteous and quick to laugh and smile. The problem is that a short amount of conversation with him is enough to reveal that he's selfish, patronizing, boring, flippant, and stubborn about his opinions. He deflects. He's positive but the cost is refusing to ponder anything that might counter his perspective and assumptions. Unfortunately, this demeanor is reminiscent of some irksome followers of faith-beliefs. They're detached and consumed by their image of a happy and proper paradise. Their inoffensiveness is mixed with hastiness to devalue anything or anyone who they consider below their concern and their strict, inflexible standards.
Besides the metaphor of Rilian's spellbound lifestyle, two more topics are obligatory during discussion of this book. The first is Puddleglum's speech to the witch, who moments ago had nearly succeeded at mystifying the whole group of heroes about the existence of anything above ground. In tightly condensed form: "Suppose we have only dreamed, or made up, all those things [...] the made-up things seem a good deal more important than the real ones. [...] I'm going to stand by the play-world. I'm on Aslan's side even if there isn't any Aslan to lead it. I'm going to live as like a Narnian as I can even if there isn't any Narnia. [...] we're leaving your court at once and setting out in the dark to spend our lives looking for the Overland. Not that our lives will be very long, I should think; but that's a small loss if the world's as dull a place as you say."

Sometimes his speech is portrayed as a stirring counterpoint to all kinds of atheism. In 2005 I might have agreed. Now, I can't stop noticing the shaky premises it rests on.

  • If it's interpreted to mean that goals and ideals depend on faith-beliefs, then I've already responded to that. Of course faith-beliefs aren't necessary to envision improvements to realities. Moreover, the odds of attainable progress increase when goals and ideals aren't kept separate from accurate realities. Certainly one can emulate Narnian behavior without believing in Narnians by faith. 
  • If Pud's speech is treated like the claim that one's realities are in the realm of personal preference, then I don't accept that either. Realities routinely violate personal preference, to put it mildly. Humans' preferences don't adjust realities telekinetically—no, psi energy doesn't appear in the equations of quantum mechanics. But humans are manifestations of matter who can participate in causing changes to other matter to better fit their preferences. (This doesn't deny the instrumental effect of their chosen mode of reaction on how current realities disturb their thoughts.) 
  • If P-glum's words are understood as asserting serious, essential downsides of not relying on faith-beliefs, I would disagree. Generally those downsides are uninvestigated prejudices. Reaching a negative conclusion about a faith-belief doesn't imply that one is negative about everything all the time. Examining rather than surmising a notion's likelihood doesn't imply that one lacks sufficient imagination for the notion.
  • If one follows the symbolic lead of Puddy-g by pronouncing the verifiable world "dull", many who share my stance would recommend additional closer, curious, nonjudgmental peeks. Then the world might not seem dull enough anymore to justify futile attempts to intertwine it with a world of faith-beliefs.

Moving along, the second obligatory book topic is Aslan's hurried "signs" of guidance to the heroes: curt instructions for them to carry out the mission while the lion is, er, somewhere else doing something else. Maybe his absence from the bulk of the story is a factor behind its lesser popularity in the series. An immensely capable and compassionate entity leaves behind Delphic (oops, wrong mythology) sayings. The recipients are assigned to conscientiously obey all the sayings. But since Aslan isn't more communicative either that time or later on, they themselves bear total responsibility for refining and applying the sayings' meanings. They anxiously debate with each other to sort out various missing details that would've been extremely helpful to have known for sure in advance.

Golly, I can't imagine why followers of faith-beliefs aren't more enthused and flattered by such an analogy...

Tuesday, July 07, 2015

competing structures

Lately I've been describing examples of ideas that overlap between my software career and my philosophical positions. The foremost consequence is the thorough puncturing of information's abstract mystique. First, the traceable meaningfulness of information is rooted in the corresponding work performed by teams of computers and humans; conversely, the traceable meaning of the work is shown in the corresponding transformations of information that the work achieved. This principle underscores that information is tied to concrete efforts, and it doesn't arise out of nothing or exist independently. Second, when a computer performs information work, the humdrum process backing it is less like mystical transfiguration than like sending water through a dauntingly intricate maze of pipes, as countless synchronized valves rapidly toggle. This principle underscores that neither information nor its changes have nonphysical foundations.

The third example of overlap is the competition among structures to be used in software projects. Projects have more than one hypothetical solution. A solution contains particular structures to represent and store the targeted information, e.g. a short alphanumeric sequence for a license plate. Additionally, the solution has a structure for the code to manipulate the information, a structure which joins together separate actions for differing circumstances (an algorithm governing algorithms). Thus, depending on the analyst's choices, the total solution houses information in varying discrete sets of structures. Each set might be functional and intelligible. Nevertheless, the structures could have serious faults relative to one another: redundant, complicated, circuitous, simplistic, disorganized, bewildering, fragile. What's worse, frequently the problems aren't apparent until more time passes, at which point the structures need to be delicately replaced or reshaped. Not all of the prospective structures that are doable for the project are equally faultless and prudent. And this is reconfirmed once shortsighted structures proceed to collide with challenging realities. ("I wish these modifications had been anticipated before the structure of this code was chosen.")
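To make the competition concrete, here's a small Python sketch (the records and fields are invented for illustration) of two doable structures for the same license-plate information. Both are functional today; their quality still differs.

```python
# Candidate A: one flat string per record. Functional, but every
# consumer must re-parse it, and malformed data slips in silently.
records_a = ["ABC-1234:blue", "XYZ-987:red"]

def color_of_a(plate):
    for rec in records_a:
        p, color = rec.split(":")
        if p == plate:
            return color

# Candidate B: explicit fields. More ceremony up front, but lookups
# are direct and validation has an obvious home.
records_b = {"ABC-1234": {"color": "blue"}, "XYZ-987": {"color": "red"}}

def color_of_b(plate):
    return records_b[plate]["color"]

# Both candidates answer today's question identically...
assert color_of_a("XYZ-987") == color_of_b("XYZ-987") == "red"

# ...but only one gracefully accepts tomorrow's modification: adding
# an owner field is trivial in B, yet forces re-parsing and rewriting
# every string in A.
records_b["ABC-1234"]["owner"] = "pat"
print("both candidates work; their quality still differs")
```

The collision with "challenging realities" arrives as soon as the project's demands shift, which is when the shortsighted candidate's faults stop being hypothetical.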

Instructive parallels to the principle of competing structures aren't hard to find outside of software. In so many subtle, open-ended contexts, there isn't a uniquely correct conclusion strictly reachable through systematic steps of deductive logic. As a result, humans end up with widely divergent "mental structures" as they attempt to grasp their confusing experiences. While they don't need to turn those structures into effective software, they do need to apply these structures of interpretation to bring order to their thoughts and acts. If they're considered sane, a lot of their adopted structures probably have at least a little coherency and accuracy. As disparate as the structures might be, obviously each is good enough in the adopter's estimation. The differences might even be superficial on closer examination. After all, as much as possible the entire group of structures should be accommodating constraints that are universal: the crucial details asserted by reliable evidence and/or by other, prior, well-established structures.

Regardless, again like the technological structures in software projects, the potential for numerous candidates does not imply that all have identical quality as judged by every standard. An organizing structure can be possible without attaining a competitive level of plausibility. Although the normal complexities of existence might not dictate an obvious and definitive singular structure, intense critique casts doubt on some candidate structures more than others. For instance, belief structures seem more dubious after calling for repeated drastic revisions, i.e. retcons. So do structures that propose "backstage" causes which happen to be almost completely undetectable by impartial investigators. Structures that avoid claiming unbounded certainty merely earn a ribbon for sincere participation in the competition of realistic ideas; they don't instantly gain as much credibility as the leading structures that also avoid this glaring flaw.

The criteria for ranking require great care as well. Explanatory structures should compete based on thoughtful neutral guidelines, not on indulgence of the favoritism embedded in preconceptions and preferences. Brisk disregard of a structure's failure to withstand unbiased evaluation is an error-prone strategy. Note that like items placed indifferently on the pans of a balance scale, directly measuring one structure alongside a second shouldn't be construed as close-minded disrespect toward either—provided the method of comparison is in fact fair and not like a tilted scale.

Generally speaking, the principle of competing structures thrives in the commonplace domains that are unsuitable for the two extreme alternatives. These are domains where there isn't one indisputable answer, but at the same time the multitude of answers aren't of uniform worth by any means. Of course, software projects are far from the only case. For an art commission, a dazzling breadth of works would meet the bare specifications...though some might consistently evoke uncomplimentary descriptions such as insipid, garish, disjointed, derivative, slapdash, repellent, etc. Out of all the works that qualified for the commission, who would then foolishly suggest that some couldn't be shoddier than the rest, or that comparative shoddiness doesn't matter?

Sunday, June 07, 2015

strings on me, sometimes near dunes

A commonly noted detail of Avengers: Age of Ultron—whether it was viewed positively or negatively—was an unexpected deluge of assorted cultural references. Unlike the movie's characters, my usual social circle doesn't drop playwrights into casual conversation, so I didn't recognize the name Eugene O'Neill. But at least my steadily aging memory could identify the ditty "I've Got No Strings" from Pinocchio. For the titular puppet, the song was its overly cheery description of its visible uniqueness. For Ultron, the song was a wry motto for the capability to rebel (um, and murder).

Different as these two were, presumably they'd agree that having no strings is a symbol for the reality that humans aren't subject to involuntary control by anything else. And they'd be committing a ludicrous exaggeration. Countless moments every day, humans feel "strings" of psychological pressure to take/reject an action or think/avoid a thought. Meditation systematically exposes the variation and subtlety of such pressures, as I've stated before in past entries on the topic. Miscellaneous phenomena in the meditators' brains are like strings that repetitively pull on them. Instead of reflexively trying to tug back, they passively watch the strings tauten, have as little effect as a thread dragging a brick, then spontaneously slacken thereafter. Realistically sensing and knowing the strings is a crucial gain. Ignorance may permit hollow complacency, but it's useless for lasting improvement. Perhaps the enormous pile of meditation sayings can absorb one more: I meditate until I recover my sight of the perpetual strings on me—which can't happen while I persist either in decreeing the strings aren't there or in mistaking the strings for who I am.

Pinocchio and Ultron aren't alone. The theme of puppet-like control dominates the six novels of Frank Herbert's Dune series—notably without puppets/robots in the storyline. Time after time, the text portrays, or states outright in numerous asides, the disadvantages that plague the wretches who can't perceive their strings and compensate inventively. They're vulnerable to all kinds of exploitation and attack. They can't evade the predictability of their actions and thoughts. They settle into the grooves of the expectations placed on them. Their reactions are restricted by their previously established mental associations. They defer to repetitions of idealized pasts. They overvalue rigidity, stability, and constancy, and they chase these conveniences in the ritual shapes of religion and government. They stop asking difficult questions because they prefer the simplicity of "ultimate" answers. They worship authority. They refuse to take on the burden of making original decisions.

Further underscoring the centrality of this theme, a variety of strings show up: addiction, greed/lust, ambition, preconception, pride, fear, aggression, vengeance, indoctrination, threats, persuasion. These are alongside the sometimes less selfish strings connecting to family, romance, camaraderie, society, cultural tradition, mutually beneficial alliances, and the whole human civilization's destiny. The self-aware heroes don't "have no strings", but the strings on them neither manipulate them nor limit them. They view them in a manner that's full, unblinking, and well-reasoned. Thus, their strings don't provoke them into simplistic, heedless, short-sighted responses in the narrow categories of obedience or disobedience or indifference. To the contrary, they imagine innovative yet rational solutions which dismantle destructive cycles. Entrapped by a seemingly unwinnable scenario, they employ their knowledge, savvy, and creativity in tactics that more or less override the underlying constraints.

More profoundly, these momentous decisions act as equally radical rewrites of the deciders' own selves. Their unbridled decisions dictate their selves, not vice-versa. They boldly change who (or what, technically speaking) they are. By discerning then conscientiously defying their strings, they're able to become something totally different. In the middle of the choice, the "self" may be an unduly constricting abstraction to be laid aside. Notwithstanding the series' famously bizarre reinterpretations of religion, that's a lesson that I can respect.

Saturday, May 30, 2015

journey to the center of the laptop

The last time I described how ideas from my software career shaped my present thinking, the topic was the interdependency between the meanings of data and code. The effective meaning of data was rooted in the details of "information systems" behind it: purposeful sequences of computer code and human labor to methodically record it, construct it, augment it, alter it, mix it with more data, etc. But the same observation could be reversed: the effective meaning (and correctness) of the information system was no more than its demonstrated transformations of data.

This viewpoint appeared to apply in other domains as well. For a wide range of candidate concepts, probing the equivalent of the concept's supporting "information system" usefully sifted its detectable meaning. How did the concept originally arise? How could the concept's definitions, verifications, and interpretations be (or fail to be) repeated and rechecked? Prospective data was discarded if it didn't have satisfactory answers to these questions; should pompous concepts face lower standards?

However, not all the software ideas were at the scale of information systems. Some knowledge illuminated the running of a single laptop. For instance, where does the laptop's computation happen? Where's the site of its mysterious data alchemy? What's the core of its "thinking"—with the precondition that this loaded term is applied purely in the loose, informal, metaphorical sense? (Note that the following will rely on simplified technological generalizations too...) The natural course of investigation is from the outside in.
  • To start with, probably everyone who regularly uses a laptop would say that the thinking takes place inside the unit. The ports around its edges for connecting audio, video, networking, generic devices, etc. are optional. These connections are great for enabling additional options to transport information to and from the laptop, but they don't enable the laptop to think. The exceptions are the battery slot and/or the power jack, which are nonetheless only providers of the raw energy consumed by the laptop's thinking.
  • Similarly, it doesn't require technical training to presume that the laptop's screen, speakers, keyboard, touchpad, camera, etc. aren't where the laptop thinks. The screen may shut off to save power. The speakers may be muted. The keyboard and touchpad are replaceable methods to detect and report the user's motions. Although these accessible inputs and outputs are vital to the user's experience of the laptop, their functions are like translation rather than thinking. Either the user's actions are transported to the laptop's innards as streams of impulses, or the final outcomes of the laptop's thinking are transported back out to the user's senses. 
  • Consequently, the interior is a more promising space to look. Encased in the walls of the laptop, under the keyboard, behind the speakers, is a meticulously assembled collection of incredibly flat and thin parts. Some common kinds of parts are temporary memory (RAM), permanent storage (internal drives), disc drives (CD, DVD, Blu-Ray), and wireless networking (Wi-Fi). By design this group receives, holds, and sends information. Information is transported but not thought about. So the thinking must occur in the component that's on the opposite side of this group's diverse attachments: the main board or motherboard.
  • To accommodate and manage the previously mentioned external ports and internal parts, the motherboard is loaded with hierarchical circuitry. It's like a mass of interconnected highways or conveyor belts. Signals travel in from the port or part, reach a hub, proceed to a later hub, and so forth. As a speedy rest stop for long-running work in progress, the temporary memory is a frequent start or end. The intricacy of contemporary device links ensures that motherboards are both busy and sophisticated, yet once more the overall task is unglamorous transportation. There's a further clue for continuing the search for thinking, though. For these transportation requests to be orderly and appropriate, the requests' source has to be the laptop's thinking. That source is the central processing unit (CPU).
  • Analysis of the CPU risks a rapid slide into complexity and the specifics of individual models. At an abstract level, the CPU is divided into separate sections with designated roles. One is loading individual instructions for execution. Another is breaking down those instructions into elemental activities of actual CPU sections. A few out of many categories of these numerous elemental activities are rudimentary mathematical operations, comparisons, copying sets of bits (binary digits, either zeros or ones) among distinct areas in the CPU's working memory, rewriting which instruction is next, and dispatching sets of bits in and out of the CPU. In any case, the sections' productive cooperation consists of transporting bits from section to section at the proper times. Again setting aside mere transporting, the remaining hideout for the laptop's thinking is somewhere inside those specialized CPU sections completing the assigned elemental activities.
  • Also considered at an abstract level, these CPU sections in turn are built from myriad tiny "gates": electronics organized to produce differing results depending on differing combinations of electricity flowing in. For example, an "AND" gate earns its name through emitting an "on" electric current when the gate's first entry point AND the second have "on" currents. Odd as it may sound, various gates ingeniously laid out, end to end and side by side, can perfectly perform the elemental activities of CPU sections. All that's demanded is that the information has consistent binary (bit) representations, which map directly onto the gates' notions of off and on. The elemental activities are performed on the information as the matching electric currents are transported through the gates. And since thinking is vastly more intriguing than dull transportation of information in any form, the hunt through the laptop needs to advance from gates to...um...er...uh... 
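The CPU's split between loading instructions and dispatching elemental activities can be sketched as a toy interpreter. To be clear, everything here is invented for illustration: the four-instruction set (LOAD, ADD, JNZ, HALT) and the register names are hypothetical, and real CPUs are enormously more elaborate.

```python
# Toy sketch of fetch/decode/execute. The instruction set and register
# names are made up for illustration; no real CPU works this simply.

def run(program):
    regs = {"A": 0, "B": 0}  # the CPU's small working memory
    pc = 0                   # which instruction is next
    while True:
        op, *args = program[pc]  # one section: load the next instruction
        pc += 1
        # another section: dispatch it to an elemental activity
        if op == "LOAD":         # copy bits into a register
            regs[args[0]] = args[1]
        elif op == "ADD":        # rudimentary mathematical operation
            regs[args[0]] += regs[args[1]]
        elif op == "JNZ":        # rewrite which instruction is next
            if regs[args[0]] != 0:
                pc = args[1]
        elif op == "HALT":
            return regs

# Add 5 and 7, leaving the total in register A
program = [("LOAD", "A", 5), ("LOAD", "B", 7), ("ADD", "A", "B"), ("HALT",)]
print(run(program)["A"])  # 12
```

Even in this cartoon version, notice that each step is just moving values from place to place at the proper time; nothing in the loop resembles "thinking".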
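The gate layouts just described can be mimicked in a few lines, modeling each gate as a function on boolean "currents" ("on" = True). The AND gate matches the description above; the XOR wiring and the half-adder are standard textbook constructions, assumed here purely for illustration.

```python
# Gates as functions on boolean currents. Only AND is taken from the
# description above; XOR and the half-adder are textbook layouts.

def AND(a, b):  # "on" only when the first entry point AND the second are "on"
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return not a

def XOR(a, b):  # laid out end to end from the gates above
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """An elemental activity built from gates: add two 1-bit numbers.
    Returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# 1 + 1 = binary 10: sum bit off, carry bit on
print(half_adder(True, True))  # (False, True)
```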
This expedition was predictably doomed from the beginning. Peering deeper doesn't uncover a sharp break between "thinking" and conducting bits in complicated intersecting routes. No, the impression of thought is generated via algorithms, which are engineered arrangements of such routes. The spectacular whole isn't discredited by its unremarkable pieces. Valuable qualities can "emerge" from a cluster of pieces that don't have the quality in isolation. In fact, emergent qualities are ubiquitous, unmagical, and important. Singular carbon atoms don't reproduce, but carbon-based life does.

Ultimately, greater comprehension forces the recognition that the laptop's version of thinking is an emergent quality. Information processing isn't the accomplishment of a miraculous segment of it; it's more like the total collaborative effect of its abundant unremarkable segments. An outsider might scoff that "adding enough stupid things together yields something smart", but an insider grasps that the way those stupid things are "added" together makes a huge difference.
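That "way of adding stupid things together" can be made concrete with a toy sketch, assuming nothing beyond standard textbook logic design: no single gate below performs arithmetic, yet the classic ripple-carry arrangement of them adds whole numbers.

```python
# A hypothetical illustration of emergence: each gate is trivially dumb,
# but the ripple-carry arrangement (a textbook design) adds integers.

def AND(a, b): return a and b
def OR(a, b):  return a or b
def XOR(a, b): return (a or b) and not (a and b)

def full_adder(a, b, carry_in):
    """One column of binary addition: returns (sum_bit, carry_out)."""
    partial = XOR(a, b)
    return XOR(partial, carry_in), OR(AND(a, b), AND(partial, carry_in))

def add(x, y, width=8):
    """Add two non-negative integers by rippling a carry through the columns."""
    result, carry = 0, False
    for i in range(width):
        bit, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
        result |= bit << i
    return result

print(add(19, 23))  # 42
```

The only difference between this and a pile of useless gates is the wiring, which is the insider's point: how the stupid things are "added" together is everything.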

Readers can likely guess the conclusion: this understanding prepares someone to contemplate that all versions of thinking could be emergent qualities. Just as the paths in the laptop were the essence of its information processing, what if the paths in creatures' brains were the essence of their information processing? Laptops don't have a particular segment that supplies the "spark" of intelligence, so what if creatures' brains don't either? Admittedly, it's possible to escape by objecting that creatures' brains are, in some unspecified manner, fundamentally unlike everything else made of matter, but that exception seems suspiciously self-serving for a creature to propose...