Showing posts with label Philosophical Observations. Show all posts

Sunday, November 01, 2015

the trials of scopes

To the uninitiated, the claim to have superior reverence for Reason may sound as presumptuous and self-congratulatory as Libertarians claiming to have superior reverence for Liberty. Way back in 2012, when I was first getting acquainted with atheists on the internet and with organized atheistic "movements" of any kind, I balked a little at their penchant for publicly designating Reason as a distinguishing feature of their views. My own shift from religious to atheistic didn't reflect that distinction. Although the outcomes of my thoughts had greatly changed, my appreciation of logical deliberation had never changed throughout. I had always embraced the importance of closely examining ideas. I know now that my former view had an undercurrent of awful deficiencies, but blatant anti-intellectualism wasn't one of them. If it had been, would I have been as attentive to the counterarguments that ultimately persuaded me?

Later, I realized my misunderstanding. In these contexts, "Reason"—not to mention "Rationalist"—represented an assorted category of commendable methods to gather and test knowledge. And a major need met through introducing a distinct category, no matter its choice of name, was to specifically set up an opposite to the flawed method of faith (and the unrecommended methods of gullibility, ungrounded conjecture, and the substitution of wishes in place of facts). From the wording they chose when advocating greater respect for Reason, I saw that they were in general agreement with my philosophies. In their more succinct language, like me they were stressing the need to carefully gauge ideas' meaning and accuracy according to the quality of the verification of those ideas' implications. As usual, abstracted labels don't expose similarities and conflicts as effectively as unsummarized particulars. In my experience, likelier than not their views shared a lot with mine: absence of belief in gods, the broad outlines of the best ways to handle ideas, and the stance that the ideas of materialistic naturalism are the most probable fit for all the actual findings that have passed adequate standards.

Nevertheless, crisply invoking plain Reason as an explanation and/or justification has a small risk. It might entice followers of faith-beliefs into once again attempting to gloss over or rationalize the key differences in their beliefs' premises. They may suggest that materialistic naturalism is also a faith (requires the method of faith) or suggest that its notion of authority is equivalent. To that end, they're pleased to detect the faintest whiff of immaterial stuff in materialistic naturalism. It offers the immediate opportunity to pronounce that this view is relatively incomplete, hypocritical, illogical, etc. Then they may swiftly declare that anyone who holds to this view is forced to concede that Reason is their own version of an immaterial foundation; like them, their view is "really" dependent on the existence of something ghostly. At the same time, they certainly won't mention the meagerness of this concession. Even to concede the existence of a spiritual being entirely synonymous with Reason, such as a Cosmic Legislator behind the "laws" of nature, would still be foreign to the anthropomorphic content of the commoner faith-beliefs. It would be a relative of Deism's aloof god.

Fortunately, I don't define Reason in those disconnected phantasmal terms, and as stated earlier, I'm sure that I'm not alone in this. My longstanding position is that Reason's existence is embodied in countless concrete acts. These acts include work carried out by human brains, despite the apparent passivity of some acts like direct perception. (Actually transforming the sense organs' nerve impulses into usable information is a convoluted process that incorporates expectations and judgments.) Except metaphorically, I don't see Reason as a guide, an oracle, or an overlord. Reason is an ideal to measure against each act, one by one. It's largely about honoring consistency. Acts that are in accordance with Reason, such as cautious deductions and generalizations, are beneficial due to the reliable consistencies of encountered realities. Reasoners can confidently let causality be causality because the reasoned links between causes and effects echo these reliable consistencies—though the links aren't easily disentangled.

This prosaic depiction of Reason has an inherent cost. Since these acts occur in complex, unique, concrete contexts, the acts' results aren't necessarily applicable to other contexts. The importance of context shouldn't be underestimated. The analogous limitation in software development is known as an entity's scope. For instance, the entity could be a single running total. By necessity, software code is broken up into manageable sections. To keep the meanings of entities predictable at all times, each entity is associated with a set scope of one or more code sections. It can be simply referenced, and possibly manipulated, only by the code sections in its scope. Of course, the section in which it is first introduced is its primary or native scope. Before an entity is manipulated in a separate scope, it must be explicitly exported there, or duplicated to a new entity there, or mixed into one or more of the preexisting entities there, or retrieved from across scopes through a completely descriptive longer version of its name (i.e. more like a full mailing address for the entity).
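To make the software half of the analogy concrete, here's a minimal sketch in Python (any language with scoping would illustrate the same point), using the running-total entity mentioned above:

```python
# A minimal sketch of scope in Python.

total = 0  # introduced at module scope: this is its primary, native scope

def add_purchase(price):
    # 'price' exists only inside this function's scope.
    # To manipulate the module-level 'total' from here, the cross-scope
    # reference must be declared explicitly.
    global total
    total += price

def report():
    # 'price' is out of scope here; referencing it would raise NameError.
    return f"running total: {total}"

add_purchase(5)
add_purchase(7)
print(report())  # running total: 12
```

Without the explicit `global` declaration, the function's assignment would quietly create a new, separate entity in its own scope, which is exactly the kind of unpredictability the scoping rules exist to prevent.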

The relevant similarity is that just as the scope of each software entity doesn't automatically extend outside of the code section it came from, clear-sighted appraisal of an act of Reason recognizes that its correctness has a scope that doesn't automatically extend outside of the context that led to it. Additional contexts might be in the scope. But neither acts nor contexts are all alike, so there can't be generic rules for determining scopes. Indeed, the boundaries of the acts' scopes might be unknown without experimentation. Near a phase transition, changes in heat have smoothly changing effects...until the phase transition happens. The phase transition violates acts of Reason on either side of it. Nor does pure mathematics escape scopes. The interior angles of a triangle have a sum of 180°...but that could be false in non-Euclidean geometries. The result of multiplication doesn't hinge on the order of the factors...but that could be false for matrices. Numerous realities remain appallingly unsympathetic to human longings for smooth simplicity.
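The matrix example is easy to verify directly. The following few lines (plain Python, no libraries) show the familiar rule about reordering factors running up against the boundary of its scope:

```python
# A quick check that the "scope" of a familiar arithmetic rule ends where
# matrices begin: multiplication of numbers commutes, but multiplication
# of matrices generally does not.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = [[1, 2], [3, 4]]
y = [[0, 1], [1, 0]]

print(matmul(x, y))  # [[2, 1], [4, 3]]
print(matmul(y, x))  # [[3, 4], [1, 2]] -- reversing the order changes the result
```

The rule wasn't "wrong" for ordinary numbers; its correctness simply had a scope, and matrices sit outside it.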

That said, the relentless accumulation of human acts of Reason has established, and repeatedly confirmed, an array of conclusions with undeniably wide scopes: elements, particles, fundamental forces, energy conversions. These conclusions weren't spoken by a singular thing, entitled "Reason", which humans idolized and begged for answers. Acts were committed, in a lurching sequence, until these conclusions' wide scopes were surveyed.

The question of scope is complicated, but it's not inconsequential hair-splitting. It's exactly why commonplace anecdotes—testimonies—have low validity. On its own, the highly specific context of an anecdote shouldn't be accepted as a decisive source for acts of "Reason" with prematurely extensive scopes. The reasoner should at minimum have plausible arguments for assuming that to be the case. And perhaps they should postpone that assumption altogether until after more acts of mental study and physical investigation.

Lastly, if Reason is a collection of acts, with shrewdly scrutinized scopes, performed by humans, then it functions as more than a category or an ideal. It's a spreading cultural custom of confronting ideas' accuracy in credible ways. I absorbed this to some degree in my conventional upbringing in contemporary America, but the cultural customs of my faith-belief started out more dominant nonetheless. At first, these two groups of customs persisted in a sometimes restless truce that divided up the territory of my beliefs. The start of the end was when I permitted myself to candidly compare the substance of the two. I didn't exchange my loyalties for a new metaphysical master. There wasn't a shining light of irresistible Reason compelling me to abruptly embrace "self-evidently right" opinions on all issues. All that happened was the crucial decision to more fully pursue the customs of the culture with the stronger corroborations, regardless of where the other disagreed on the details uncovered. It wasn't my fault that a thorough commitment to the one, combined with increasing quantities of information, ended up eroding the whole structure of the other...

Monday, September 07, 2015

thank you for not smoking

The previous concept reapplied from software was the black box analysis technique. The technique metaphorically places something inside a black box, which signifies avoidance of direct scrutiny or even identification. The something's effects are examined instead, thereby circumventing the interference or the labor of knowing the something and its inner workings. The analysis proceeds through the factual details of various interactions between the something and its environment.

It's highly relevant to the goal of objective testing, because it avoids prejudices. The act of inspection is entangled in the inspector's slanted perspective, while black box tests compare clear-cut outcomes to uninfluenced expectations. If external outcomes don't satisfy neutral and sensible criteria then the something should be reevaluated, regardless of who/what it is and the characteristics it supposedly has within.

Beyond black boxes, the topic of testing software includes another broadly useful concept: smoke tests. These are rapid, shallow, preliminary, unmistakable checks that the software is minimally serviceable. The name comes from the analogy of activating electronic equipment and just seeing if it smokes. A smoke test of software runs the smallest tasks. Can it start? Can it locate the software that it teams up with? Can it load its configuration? Can it produce meaningful output at all?

No specialized expertise is necessary to notice that smoke tests are vital but also laughably inadequate. Since the software must pass much more rigorous tests, it's logical to question why smoke tests are worthwhile to perform more than once on the same software. However, the bare fact is that software seldom stays the same, especially in the middle of furious development. Thus the worth of smoke tests is more for quickly determining when a recent modification is gravely problematic. A malfunctioning smoke test implies the need to reconsider the recent modification and rectify it—probably very soon in order to prevent delays in the overall schedule.
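For readers outside software, a sketch may help. The names below (`load_config`, `generate_report`) are invented stand-ins for whatever minimal tasks a real program performs, not a real API; the point is only the shape of a smoke test: shallow, fast, unmistakable.

```python
# A hedged sketch of a smoke test for a hypothetical report-generating
# module. It doesn't verify correctness in depth; it only confirms the
# software is minimally serviceable after a recent modification.

def load_config():
    # Stand-in for reading a real configuration file.
    return {"title": "Monthly Report"}

def generate_report(config):
    # Stand-in for the module's actual work.
    return f"{config['title']}: 0 records"

def smoke_test():
    # Can it load its configuration?
    config = load_config()
    assert config, "config failed to load"
    # Can it produce meaningful output at all?
    output = generate_report(config)
    assert output, "no output produced"
    return "smoke test passed"

print(smoke_test())
```

A failure here doesn't say much about what's wrong, only that something fundamental is, and that the recent modification deserves immediate reconsideration.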

The surprise is that smoke tests resemble a mental tactic that shows up in various informal philosophizing. Like software developers who screen their attempts with smoke tests and then promptly fix and retry in the event of failure, a follower of a belief may repeatedly rethink its specifics until it's acceptable according to the equivalent of a smoke test. In essence the follower has a prior commitment to a conclusion, which they purposely reshape so that it at least doesn't "smoke". This tactic greatly differs from carefully proposing a tentative claim after collecting well-founded corroboration. And it differs from the foundation of productive debate: the precondition that the debaters' arguments are like orderly chains from one step to the next, not like lumps of clay that continually transform to evade objections.

As might be expected, the smoke test tactic easily leads to persistent misunderstandings about aims. The unambitious aim of the tactic is a pruned belief that isn't flagrantly off-base, not a pristine belief that's most likely to be accurate. A few belief smoke tests are absurdity, contradiction with solidly established information, violation of common contemporary ethics, and so forth. (The changes might qualify as retcons.) Before they show the candor to concede that their aim is a treasured belief that isn't transparently wrong, rather than the novel belief that's plausibly right, they're mired in a loop of mending belief by trial and error.

They may justify the tactic by saying, "Of course I can't profess the most uncomplicated, unswerving variant of my belief. I know that variant can't be correct. It would be too [absurd, barbaric, intolerant, naive, infeasible, bizarre, self-contradictory]. I use my best understanding to strengthen the weak points that ring false. Doesn't everyone? Why's that a reason for criticism?"

This rationale is persuasive; to revise beliefs over time is no shortcoming. The telling difference is that everyone else isn't using the tactic on beliefs portrayed as complete, authoritative, correct, and self-supporting. It presents two issues in that case. First, why would the belief have been communicated in such a way that the recipients need to make fine-grained clarifications for the sake of succeeding at smoke tests—which are exceedingly basic, after all? Second, once someone has begun increasingly reworking the original belief to comply with their sense of reasonableness, when does the belief itself stop being a recognizable, beneficial contributor to the result? Is it not a bad sign when something requires numerous manual interventions, replacement of parts, and gentle handling, or else it swiftly proceeds to belch embarrassing smoke?

Sunday, August 02, 2015

black boxes blocking baseless bias

Considering the proportion of time filled by a full-time career, its thought patterns carve deep grooves. Hence the blog winds up with entries musing on the wider application of software patterns like, say, competing structures. In a software project, diverse structures of data and code could all be part of doable solutions. But the project allows only one solution. Not all of these structures have equal quality, so a competition is appropriate. Meanwhile in the philosophical domain and elsewhere, humans contrive diverse mental structures for the "project" of thinking and acting within their puzzling realities. And it shouldn't be verboten for these structures of uneven quality to fairly compete.

That's the toughest obstacle in practice: defining and applying legitimately equitable standards of comparison. Whenever evaluators have decided beforehand that the structures they endorse will be superior, then their tendency is to choose and distort the standards to assure it. The ones committed to candor readily admit this; even better, if they're confident then they welcome offers of separate reviews that will validate the credibility of their own.

Luckily, the ceaseless struggle to approach ideas with less partisanship has another pattern back in the technological domain. The common black box technique refers to analyzing something purely via the stuff entering and exiting it. Knowledge about the thing, and its contents, is excluded for whatever reason. Conceptually, the thing is hidden inside a black box with little holes for stuff to pass through. On a diagram, multiple arrows go to and from the box, but nothing is written in the box except its name. As a side note, a representation of a single, huge, unexamined thing containing miscellaneous parts, such as an external computer network, might have been drawn as a bumpy cloud to emphasize its vague "shape" and "size".

The black box analysis is simplified and undeniably easier to manage as a consequence. Sometimes, depending on the task, the thing's innards are mostly off-topic. To smoothly interact with the thing, the more crucial details are what agreed-upon stuff will come out (or occur) after agreed-upon stuff goes in. Without condensed black box abstractions, the modern industrial age of specialized, interchangeable technology would be infeasible. Everyone would need to know an excessive amount about the individually complex pieces merely to construct a functioning whole. This is an equally essential ingredient of software. With published protocols and data formats, software can handle other software as black box peers which accept and emit lucid messages. Broad classes of compliant software can profitably cooperate.
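A tiny sketch makes the technique tangible. Here `sort_scores` is an invented stand-in for any unit under test; the checker deliberately knows nothing about its insides and judges it purely by comparing agreed-upon inputs to agreed-upon outputs:

```python
# A minimal sketch of black box testing: the checker treats the function
# as an opaque box, comparing what goes in against what comes out.

def sort_scores(scores):
    # The contents of this box could be anything; the checker never looks.
    return sorted(scores, reverse=True)

def black_box_check(func):
    cases = [
        ([3, 1, 2], [3, 2, 1]),  # typical input
        ([], []),                # empty input
        ([5, 5], [5, 5]),        # duplicates
    ]
    return all(func(given) == expected for given, expected in cases)

print(black_box_check(sort_scores))  # True
```

Notice that the checker would pass any box whose footprints match, and fail any that don't, regardless of who wrote it or how clever its innards supposedly are.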

Overall, an extensive black box description is invaluable for something that's largely unknown—by design or by circumstance. In contrast, the value of a black box description for something that's largely known is less intuitive. It hinges on the recognition that too-close familiarity with something might build a deceptive or incomplete impression of its satisfactoriness. When the something is software, it's only logical that its industrious writer is unaware of its oversights, else they wouldn't have written the oversights. At writing time, they may have framed their solution too narrowly to enclose the project's range of subtleties. Later, their ongoing frame goes on to prevent them from imagining tests capable of exposing the cramped, inadequate boundaries.

Mistakes of oversight are rivaled by occasionally embarrassing mistakes of "transcription": the writer failed to faithfully encode their original intent. They wanted to read memory location Q but they wrote code that reads J. Once again, it's only logical that such mistakes wouldn't survive if the writer's own firsthand experience caught every gaffe they introduced. They may have been distracted. Depressingly often, disorganization gradually accumulates in the code segment. Or, in a less forgivable offense, it's confusingly expressed from the outset. As a result, although they're staring directly at a mistake, they're distracted by the onerous strain of deciphering and tracking the bigger picture.

Less specifically, the sizable value of black box analysis for a largely known something lies in cross-checking the fallible judgments of "insiders" about that something. Placing it in a black box counteracts the hypothetical shortcomings of the insiders' entanglement. It includes putting aside comprehensive information about something's unique identity and full set of characteristics, and putting aside other connections/relationships, and putting aside appeal/repulsiveness. It's the candid, untainted estimation of whether the something's observed "footprints" match levelheaded expectations in pertinent contexts. The writer's admirable pride of craftsmanship doesn't attest that the supposedly finished software unit operates acceptably in all probable cases.

This practice's basic features are visible throughout innumerable domains, though with varied titles. It chimes with the "blinding" of subjects in experiments and surveys of customer tastes. (In an unusually palpable manifestation of the metaphor, part of the blinding procedure might employ nondescript, opaque boxes.) Blinding forces them to assess the sample with the sole attribute they can sense. From their view, the sample's source is in a black box. A second example is the services of an editor. They can approve and/or modify sections of a draft document according to the unprepared reactions it elicits in them. Unlike the submitter, they aren't an "expert" at knowing what it's meant to convey. They don't feel the submitter's strong sentimental attachments. They have a greater chance of encountering the draft itself. Where the editor is concerned, the labor behind it doesn't affect their revisions. The draft came out of a black box.

A third example is in the same theme, albeit more cerebral. It's the strategy of, after a long session of work on a preliminary creation, reserving time away before revisiting it. In the heart of the session, the creation is summarizing a portion of the creator's stream of consciousness. Therefore the contemporaneous brain activity grants them the perfect ability to effortlessly compensate for the creation's ambiguities and awkward aspects. To them in that instant, the creation's "seamless" substance and beauty are impossible to miss. When they return, their brain's state isn't enmeshed with the creation. They take a fresh look at its pluses and minuses in isolation. This is akin to the advice of not transforming a late-night brainstorm into irreversible actions until pondering it next morning. Interestingly, the something in the black box is the past configuration of the brain currently reexperiencing that past configuration's product, i.e. the creation/brainstorm. The critical difference is that the product isn't rubber-stamped due to where it came from—whose brain it rippled out of. No, the caliber of the product discloses the worthiness of whatever produced it, in this instance a past brain configuration. (It might be uncomplimentary. "My brain was really mesmerized by that tangent, but this is unusable nonsense.")

Despite its encouragingly widespread and timeless scope, black box-style thinking is a supplemental tool with inherent limits. It's for temporarily redirecting attention to the external symptoms of something's presence. Its visual counterpart is a sketch of a silhouette. It doesn't capture something's essence. It's not an explanation; on its own, a lengthy historical listing doesn't reliably predict responses to novel situations.

The epitome of an area dominated by these caveats is human conduct. Without question, the brain's convoluted character precludes painless black box analysis for rigorously unraveling how it runs. It exhibits context-dependent overrides of overrides of overrides...or it might not. Trends discovered during good tempers may have little relation to bad tempers. Or mannerisms connected to one social group may have little relation to mannerisms connected to contrasting social groups. Or a stranger switches among several conscious (or unconscious) guises, aimed at selectively steering the verdicts of unacquainted onlookers. The stranger is in a black box to the onlookers. The guises are collections of faked signals chosen to misinform the onlookers' analysis of that black box.

Caveats notwithstanding, entire societies heavily regulate members through black boxes of human conduct. (As a popular song from the early nineties famously didn't proclaim, "Here we are now: in containers.") Members are efficiently pigeonholed by unsophisticated facts about their deeds. In the society, facts of that type serve as decisive announcements of the member's inner nature. So, members who wish to be seen a certain way are obliged to adhere to the linked mandates. No extra particulars about them are accounted for. For this purpose, they're in a black box. It appears callous at first glance, but it exemplifies the earlier statements about the value of shortcuts for working with something that's largely unknown. When societies reach massive scales, it's impossible for members to obtain penetrating awareness of every other member. Like before, black box understandings ease interactions with scarce information about either party, because the pair can foresee what will transpire between them.

Furthermore, black box analysis of human conduct shares the advantages stated earlier for inspecting something that's largely known. The effectiveness is lowered by the caveats of this area but not eliminated altogether. It's more than adequate for imposing sharp, sensible thresholds on other findings. "If I didn't know them as well as I do, and they acted the same as they have in the situations I know of, would there be a disparity in how I esteem them? If there is, do I have a well-founded excuse for it? At some point, my firmest convictions about who they are should be aligning to some degree with their acts...."

Tuesday, July 07, 2015

competing structures

Lately I've been describing examples of ideas that overlap between my software career and my philosophical positions. The foremost consequence is the thorough puncturing of information's abstract mystique. First, the traceable meaningfulness of information is rooted in the corresponding work performed by teams of computers and humans; conversely, the traceable meaning of the work is shown in the corresponding transformations of information that the work achieved. This principle underscores that information is tied to concrete efforts, and it doesn't arise out of nothing or exist independently. Second, when a computer performs information work, the humdrum process backing it is less like mystical transfiguration than like sending water through a dauntingly intricate maze of pipes, as countless synchronized valves rapidly toggle. This principle underscores that neither information nor its changes have nonphysical foundations.

The third example of overlap is the competition among structures to be used in software projects. Projects have more than one hypothetical solution. A solution contains particular structures to represent and store the targeted information, e.g. a short alphanumeric sequence for a license plate. Additionally, the solution has a structure for the code to manipulate the information, a structure which joins together separate actions for differing circumstances (an algorithm governing algorithms). Thus, depending on the analyst's choices, the total solution houses information in varying discrete sets of structures. Each set might be functional and intelligible. Nevertheless, the structures could have serious faults relative to one another: redundant, complicated, circuitous, simplistic, disorganized, bewildering, fragile. What's worse, frequently the problems aren't apparent until more time passes, at which point the structures need to be delicately replaced or reshaped. Not all of the prospective structures that are doable for the project are equally faultless and prudent. And this is reconfirmed once shortsighted structures collide with challenging realities. ("I wish these modifications had been anticipated before the structure of this code was chosen.")
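The license plate example can be sketched in a few lines of Python. Both structures below are "doable"; both answer the same question correctly; yet one is fragile and circuitous while the other states the project's actual need directly. (The plate values and function names are invented for illustration.)

```python
# Two competing structures for the same information: which plates an
# owner has registered. Both work; they are not equally prudent.

# Structure A: one comma-separated string. Functional, but every lookup
# must re-split the string, and a stray comma in a plate would corrupt it.
plates_a = "ABC123,XYZ789"

def has_plate_a(plates, plate):
    return plate in plates.split(",")

# Structure B: a set. The membership question the project actually asks
# is expressed directly, with no parsing step to go wrong.
plates_b = {"ABC123", "XYZ789"}

def has_plate_b(plates, plate):
    return plate in plates

print(has_plate_a(plates_a, "ABC123"))  # True
print(has_plate_b(plates_b, "ABC123"))  # True -- same answer, sturdier structure
```

Both candidates pass today's test; the faults of the first only surface later, when the realities of messier data collide with its shortsighted shape.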

Instructive parallels to the principle of competing structures aren't hard to find outside of software. In so many subtle, open-ended contexts, there isn't a uniquely correct conclusion strictly reachable through systematic steps of deductive logic. As a result, humans end up with widely divergent "mental structures" as they attempt to grasp their confusing experiences. While they don't need to turn those structures into effective software, they do need to apply these structures of interpretation to bring order to their thoughts and acts. If they're considered sane, a lot of their adopted structures probably have at least a little coherency and accuracy. As disparate as the structures might be, obviously each is good enough in the adopter's estimation. The differences might even be superficial on closer examination. After all, as much as possible the entire group of structures should be accommodating constraints that are universal: the crucial details asserted by reliable evidence and/or by other, prior, well-established structures.

Regardless, again like the technological structures in software projects, the potential for numerous candidates does not imply that all have identical quality as judged by every standard. An organizing structure can be possible without attaining a competitive level of plausibility. Although the normal complexities of existence might not dictate an obvious and definitive singular structure, intense critique casts doubt on some candidate structures more than others. For instance, belief structures seem more dubious after calling for repeated drastic revisions, i.e. retcons. So do structures that propose "backstage" causes which happen to be almost completely undetectable by impartial investigators. Structures that avoid claiming unbounded certainty merely earn a ribbon for sincere participation in the competition of realistic ideas; they don't instantly gain as much credibility as the leading structures that also avoid this glaring flaw.

The criteria for ranking require great care as well. Explanatory structures should compete based on thoughtful neutral guidelines, not on indulgence of the favoritism embedded in preconceptions and preferences. Brisk disregard of a structure's failure to withstand unbiased evaluation is an error-prone strategy. Note that like items placed indifferently on the pans of a balance scale, directly measuring one structure alongside a second shouldn't be construed as close-minded disrespect toward either—provided the method of comparison is in fact fair and not like a tilted scale.

Generally speaking, the principle of competing structures thrives in the commonplace domains that are unsuitable for the two extreme alternatives. These are domains where there isn't one indisputable answer, but at the same time the multitude of answers aren't of uniform worth by any means. Of course, software projects are far from the only case. For an art commission, a dazzling breadth of works would meet the bare specifications...though some might consistently evoke uncomplimentary descriptions such as insipid, garish, disjointed, derivative, slapdash, repellent, etc. Out of all the works that qualified for the commission, who would then foolishly suggest that some couldn't be shoddier than the rest, or that comparative shoddiness doesn't matter?

Saturday, May 30, 2015

journey to the center of the laptop

The last time I described how ideas from my software career shaped my present thinking, the topic was the interdependency between the meanings of data and code. The effective meaning of data was rooted in the details of "information systems" behind it: purposeful sequences of computer code and human labor to methodically record it, construct it, augment it, alter it, mix it with more data, etc. But the same observation could be reversed: the effective meaning (and correctness) of the information system was no more than its demonstrated transformations of data.

This viewpoint appeared to apply in other domains as well. For a wide range of candidate concepts, probing the equivalent of the concept's supporting "information system" usefully sifted its detectable meaning. How did the concept originally arise? How could the concept's definitions, verifications, and interpretations be (or fail to be) repeated and rechecked? Prospective data was discarded if it didn't have satisfactory answers to these questions; should pompous concepts face lower standards?

However, not all the software ideas were at the scale of information systems. Some knowledge illuminated the running of a single laptop. For instance, where does the laptop's computation happen? Where's the site of its mysterious data alchemy? What's the core of its "thinking"—with the precondition that this loaded term is applied purely in the loose, informal, metaphorical sense? (Note that the following will rely on simplified technological generalizations too...) The natural course of investigation is from the outside in.
  • To start with, probably everyone who regularly uses a laptop would say that the thinking takes place inside the unit. The ports around its edges for connecting audio, video, networking, generic devices, etc. are optional. These connections are great for enabling additional options to transport information to and from the laptop, but they don't enable the laptop to think. The exceptions are the battery slot and/or the power jack, which are nonetheless only providers of the raw energy consumed by the laptop's thinking.
  • Similarly, it doesn't require technical training to presume that the laptop's screen, speakers, keyboard, touchpad, camera, etc. aren't where the laptop thinks. The screen may shut off to save power. The speakers may be muted. The keyboard and touchpad are replaceable methods to detect and report the user's motions. Although these accessible inputs and outputs are vital to the user's experience of the laptop, their functions are like translation rather than thinking. Either the user's actions are transported to the laptop's innards as streams of impulses, or the final outcomes of the laptop's thinking are transported back out to the user's senses. 
  • Consequently, the interior is a more promising space to look. Encased in the walls of the laptop, under the keyboard, behind the speakers, is a meticulously assembled collection of incredibly flat and thin parts. Some common kinds of parts are temporary memory (RAM), permanent storage (internal drives), disc drives (CD, DVD, Blu-ray), and wireless networking (Wi-Fi). By design this group receives, holds, and sends information. Information is transported but not thought about. So the thinking must occur in the component that's on the opposite side of this group's diverse attachments: the main board or motherboard.
  • To accommodate and manage the previously mentioned external ports and internal parts, the motherboard is loaded with hierarchical circuitry. It's like a mass of interconnected highways or conveyor belts. Signals travel in from the port or part, reach a hub, proceed to a later hub, and so forth. As a speedy rest stop for long-running work in progress, the temporary memory is a frequent start or end. The intricacy of contemporary device links ensures that motherboards are both busy and sophisticated, yet once more the overall task is unglamorous transportation. There's a further clue for continuing the search for thinking, though. For these transportation requests to be orderly and appropriate, the requests' source has to be the laptop's thinking. That source is the central processing unit (CPU).
  • Analysis of the CPU risks a rapid slide into complexity and the specifics of individual models. At an abstract level, the CPU is divided into separate sections with designated roles. One is loading individual instructions for execution. Another is breaking down those instructions into elemental activities of actual CPU sections. A few out of many categories of these numerous elemental activities are rudimentary mathematical operations, comparisons, copying sets of bits (binary digits, either zeros or ones) among distinct areas in the CPU's working memory, rewriting which instruction is next, and dispatching sets of bits in and out of the CPU. In any case, the sections' productive cooperation consists of transporting bits from section to section at the proper times. Again setting aside mere transporting, the remaining hideout for the laptop's thinking is somewhere inside those specialized CPU sections completing the assigned elemental activities.
  • Also considered at an abstract level, these CPU sections in turn are built from myriad tiny "gates": electronics organized to produce differing results depending on differing combinations of electricity flowing in. For example, an "AND" gate earns its name through emitting an "on" electric current when the gate's first entry point AND the second have "on" currents. Odd as it may sound, various gates ingeniously laid out, end to end and side by side, can perfectly perform the elemental activities of CPU sections. All that's demanded is that the information has consistent binary (bit) representations, which map directly onto the gates' notions of off and on. The elemental activities are performed on the information as the matching electric currents are transported through the gates. And since thinking is vastly more intriguing than dull transportation of information in any form, the hunt through the laptop needs to advance from gates to...um...er...uh... 
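The gates just described can be mimicked in a few lines of code. The following is a hedged sketch, not a real circuit design: AND and XOR are the standard gate names, and wiring them into a "half-adder" is one textbook arrangement for summing two bits. The function names and the demo loop are mine, added purely for illustration.

```python
# Model the "gates" as tiny functions operating on bits (0 = off, 1 = on).

def AND(a, b):
    # Emits "on" (1) only when both inputs are on.
    return a & b

def XOR(a, b):
    # Emits "on" when exactly one input is on.
    return a ^ b

def half_adder(a, b):
    """Add two single bits; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# No individual gate "adds", yet laid end to end they perform addition:
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```

Notice that neither gate knows anything about arithmetic; the addition exists only in how the two are wired together, which previews the point about emergence below.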
This expedition was predictably doomed from the beginning. Peering deeper doesn't uncover a sharp break between "thinking" and conducting bits in complicated intersecting routes. No, the impression of thought is generated via algorithms, which are engineered arrangements of such routes. The spectacular whole isn't discredited by its unremarkable pieces. Valuable qualities can "emerge" from a cluster of pieces that don't have the quality in isolation. In fact, emergent qualities are ubiquitous, unmagical, and important. Singular carbon atoms don't reproduce, but carbon-based life does.

Ultimately, greater comprehension forces the recognition that the laptop's version of thinking is an emergent quality. Information processing isn't the accomplishment of a miraculous segment of it; it's more like the total collaborative effect of its abundant unremarkable segments. An outsider might scoff that "adding enough stupid things together yields something smart", but an insider grasps that the way those stupid things are "added" together makes a huge difference.
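The way the "stupid things" are added together can itself be sketched. Below is a hedged toy version of the fetch/decode/execute cycle described earlier, for an imaginary three-instruction machine; the instruction names (LOAD, ADD, JUMPZ) and the single register are invented for illustration and correspond to no real CPU. Every individual step is mere transportation of values, yet the loop as a whole computes a sum.

```python
# A toy machine: each instruction is a (name, argument) pair.

def run(program):
    registers = {"A": 0}   # the CPU's tiny working memory
    pc = 0                 # which instruction is next
    while pc < len(program):
        op, arg = program[pc]        # fetch the next instruction
        if op == "LOAD":             # copy a value into the register
            registers["A"] = arg
        elif op == "ADD":            # rudimentary mathematical operation
            registers["A"] += arg
        elif op == "JUMPZ":          # rewrite which instruction is next
            if registers["A"] == 0:
                pc = arg
                continue
        pc += 1                      # otherwise proceed to the next instruction
    return registers["A"]

print(run([("LOAD", 2), ("ADD", 3)]))   # → 5
```

The "smartness" is nowhere in LOAD or ADD individually; it resides in the arrangement, exactly as the paragraph above argues.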

Readers can likely guess the conclusion: this understanding prepares someone to contemplate that all versions of thinking could be emergent qualities. Just as the paths in the laptop were the essence of its information processing, what if the paths in creatures' brains were the essence of their information processing? Laptops don't have a particular segment that supplies the "spark" of intelligence, so what if creatures' brains don't either? Admittedly, it's possible to escape by objecting that creatures' brains are, in some unspecified manner, fundamentally unlike everything else made of matter, but that exception seems suspiciously self-serving for a creature to propose... 

Saturday, May 02, 2015

data : code :: concept : verification

I've sometimes mused about whether my eventual embrace of a Pragmatism-esque philosophy was inevitable. The ever-present danger in musings like this is ordinary hindsight bias: concealing the actual complexity after the fact with simple, tempting connections between present and past. I can't plausibly propose that the same connections would impart equal force on everyone else. In general, I can't rashly declare that everyone who shares one set of similarities with me is obligated to share other sets of similarities. Hastily viewing everyone else through the tiny lens of myself is egocentrism, not well-founded extrapolation.

For example, I admit I can't claim that my career in software development played an instrumental role in the switch. I know too many competent colleagues whose beliefs clash with mine. At the same time, a far different past career hasn't stopped individuals in the Clergy Project from eventually reaching congenial beliefs. Nevertheless, I can try to explain how some aspects of my specific career acted as clues that prepared and nudged me. My accustomed thought patterns within the vocational context seeped into my thought patterns within other contexts.

During education and on the job, I encountered the inseparable ties between data and code. Most obviously, the final data was the purpose of running the code (in games the final data was for immediately synthesizing a gameplay experience). Almost as obviously, the code couldn't run without the data flowing into it. Superficially, in a single ideal program, code and data were easily distinguishable collaborators, each perfectly playing its part in turn. Perhaps a data set went in, and a digest of statistical measurements came out, and the unseen code might have run in a machine on the other side of the internet.
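That "data set in, digest of statistical measurements out" scenario can be made concrete with a hedged sketch; the function name and the sample readings are invented for illustration. The point of the sketch is the mutual dependence: the code is inert until data flows into it, and the digest is meaningless without the code that shaped it.

```python
import statistics

def digest(readings):
    """Code that does nothing until a data set flows into it."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "stdev": statistics.pstdev(readings),
    }

# Data flows in; a digest of statistical measurements comes out.
print(digest([3.1, 2.9, 3.0, 3.2]))
```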

At a more detailed level of comprehension, and in messy and/or faulty projects cobbled together from several prior projects, that rosy view became less sensible. When final data was independently shown to be inaccurate, the initial cause was sometimes difficult to deduce. Along the bumpy journey to the rejected result, data flowed in and out of multiple avenues of code. Fortunately the result retained meaningfulness about the interwoven path of data and code that led to it, regardless of its regrettable lack of meaningfulness in regard to its intended purpose. It authentically represented a problem with that path. Thus its externally checked mistakenness didn't in the least reduce its value for pinpointing and resolving that path's problems.

That wasn't all. The reasoning applied to flawless final data as well, which achieved two kinds of meaningfulness. Its success gave it metaphorical meaningfulness in regard to satisfying the intended purpose. But it too had the same kind of meaningfulness as flawed final data: literal meaningfulness about the path that led to it. It was still the engineered aftereffect of a busy model built out of moving components of data and code—a model ultimately made of highly organized currents of electricity. It was a symbolic record of that model's craftsmanship. Its accurate metaphorical meaning didn't erase its concrete roots.

The next stage of broadening the understanding of models was to incorporate humans as components—exceedingly sophisticated and self-guiding components. They often introduced the starting data or reviewed the ultimate computations. On top of that, they were naturally able to handle the chaotic decisions and exceptions that would require a lot more effort to perform with brittle code. Of course the downside was that their improvisations could derail the data. Occasionally, the core of an error was a human operator's unnoticed carelessness filling in a pivotal element two steps ago. Or a human's assumptions for interpreting the data were inconsistent with the assumptions used to design the code they were operating.

In this sense, humans and code had analogous roles in the model. Each was involved in carrying out cooperative series of orderly procedures on source data and leaving discernible traces in the final data. The quality of the final data could be no better than the quality of the procedures (and the source data). A model this huge was more apt to have labels such as "business process" or "information system", abbreviated IS. Cumulatively, the procedures of the complete IS acted as elaborations, conversions, analyses, summations, etc. of the source data. Not only was the final data meaningful for inferring the procedures behind it, but the procedures in turn produced greater meaningfulness for the source data. Meanwhile, they were futilely empty, motionless, and untested without the presence of data.

Summing up, data and code/procedures were mutually meaningful throughout software development. As mystifying as computers appeared to the uninitiated, data didn't really materialize from nothing. Truth be told, if it ever did so, it would arouse well-justified suspicion about its degree of accuracy. "Where was this figure drawn from?" "Who knows, it was found lying on the doorstep one morning." Long and fruitful exposure to this generalization invited speculation about its limits. What if strict semantic linking between data and procedures weren't confined to the domain of IS concepts?

A possible counterpoint was repeating that these systems were useful but also deliberately limited and refined models of complex realities. Other domains of concepts were too dissimilar. Then...what were those unbridgeable differences, exactly? What were the majority of beneficial concepts, other than useful but also deliberately limited and refined models? What were the majority of the thoughts and actions to verify a concept, other than procedures to detect the characteristic signs of the alleged concept? What were the majority of lines of argument, other than abstract procedures ready to be rerun? What were the majority of secondary cross-checks, other than alternative procedures for obtaining equivalent data? What were the majority of serious criticisms of a concept, other than criticisms of the procedures justifying it? What were the majority of definitions, other than procedures to position and orient a concept among other known concepts?

For all that, it wasn't that rare for these other domains to contain some lofty concepts that were said to be beyond question. These were the kind whose untouchable accuracy was said to spring from a source apart from every last form of human thought and activity. Translated into the IS perspective, these demanded treatment like "constants" or "invariants": small, circular truisms in the style of "September is month 9" and "Clients have one bill per time period". In practice, some constants might need to change from time to time, but those changes weren't generated via the IS. These reliable factors/rules/regularities furnished a self-consistent base for predictable IS behavior.

Ergo, worthwhile constants never received and continually contributed. They were unaffected by data and procedures yet were extensively influential anyway. They probably had frequent, notable consequences elsewhere in the IS. Taken as a whole, those system consequences strongly hinted at the constants at work—including tacit constants never recognized by the very makers of the system. Like following trails of breadcrumbs, with enough meticulous observation, the backward bond from the system consequences to the constants could be as certain as the backward bond from data to procedures.

In other words, on the minimal condition that the constants tangibly mattered to the data and procedures of the IS, they yielded accountable expectations for the outcomes and/or the running of the IS. The principle was more profound when it was reversed: total absence of accountable expectations suggested that the correlated constant itself was either absent or at most immaterial. It had no pertinence to the system. Designers wishing to conserve time and effort would be advised to ignore it altogether. It belonged in the routine category "out of system scope". By analogy, if a concept in a domain besides IS declined the usual methods to be reasonably verified, and distinctive effects of it weren't identifiable in the course of reasonably verifying anything else, then it corresponded to neither data nor constants. Its corresponding status was out of system scope; it didn't merit the cost of tracking or integrating it.

As already stated, the analogy was neither undeniable nor unique. It didn't compel anyone with IS expertise to reapply it to miscellaneous domains, and expertise in numerous fields could lead to comparable analogies. There was a theoretical physical case for granting it wide relevance, though. If real things were made of matter (or closely interconnected to things made of matter), then real things could be sufficiently represented with sufficient quantities of the data describing that matter. If matter was sufficiently represented, including the matter around it, then the ensuing changes of the matter were describable with mathematical relationships and thereby calculable through the appropriate procedures. The domain of real things qualified as an IS...an immense IS of unmanageable depth which couldn't be fully modeled, much less duplicated, by a separate IS feasibly constructed by humans.

Monday, January 19, 2015

transitive corroboration not enigmatic authorities

The last entry considered the shallow misconception that dissenters from faith-beliefs insist on evaluating each statement like a scientific hypothesis. But that exaggerated misconception distracted from the actual recommended evaluation strategy, which was much simpler: evaluate the quantity and quality of corroboration. Corroboration happens through many strategies, and not all are applicable to all statements. Realities form a mosaic, so corroboration has many diverse data sources too. It can be complicated in practice. It involves careful judgment. Anyone who's been part of a jury would agree.

However, for the sake of contrasting the attitudes of typical dissenters from followers, one aspect is key and worthy of elaboration: the corroboration of secondhand statements. Candidly, for the majority of statements, neither of the two groups ordinarily has feasible opportunities to obtain firsthand corroboration. They must rely on secondhand statements filtered by additional criteria. The problem is that this common dependence on secondhand corroboration can lead to false comparisons ("We're not so different, you and I!") and then to misunderstandings and stereotypes.

Within the mentality of loyal followers, the supreme criterion for a secondhand statement is nothing more than the authoritativeness of whoever produced it. Thus they think that they differ from dissenters over nothing more than which authorities to revere. Followers of faith-beliefs can mistakenly suggest that every variant of atheism qualifies as a faith with competing cosmic dogmas and stories and laws. Or they can mistakenly suggest that disregarding uncorroborated statements is the same as closed-mindedness. Of course, "postmodern" followers are the most enthusiastic about this; according to them, statements stem from an authority's narrative, different authorities have different narratives, and no narrative is more broadly correct than any other.

But this notion of indisputable authorities is precisely backwards or at least too gullibly lopsided. Truthfully, they might often be valuable sources for corroboration...if their corroborating statements are themselves corroborated. Despite their proud claims to the contrary, they aren't immune to the need for corroboration. A more elementary version is that you must show your work to earn full credit, no matter who you are. An authority shouldn't be allowed to curtly dictate that a statement is accurate without justification.

Essentially, during the exceedingly normal task to accumulate and estimate corroboration, authorities aren't transcendent oracles who mysteriously take over and finish it. They're more like unavoidable extensions of the one gathering corroboration. For instance, perhaps Fred can't corroborate a statement for himself, but he can communicate with Barney to discover what Barney did to corroborate it. If Barney refuses to deliver an account of what he did, or if the account is as unbelievable as a chat with The Great Gazoo, then Fred isn't obligated to accept Barney's uncorroborated corroboration. But if Fred accepts Barney's account, then Fred hasn't necessarily anointed Barney as an authority (Grand Poobah?). Fred has merely borrowed Barney's plausible corroboration. Mentally, he's permitted Barney—Barney's account, anyway—to represent what he would do if he could corroborate it himself. Fred can generalize from Barney, unless he reasonably supposes that he might encounter incompatible results if he were in Barney's place.

This kind of virtual transference has lots of precedents in mathematical contexts. The logic is applicable to a variety of relationships between amounts. If X is equal to Y, and Y is equal to Z, then X is equal to Z. If Miami's noonday air temperature is hotter than Nashville's, and Nashville's is hotter than Fargo's, then Miami's is hotter than Fargo's. Relationships having this characteristic are transitive. Fortunately, corroboration is transitive much of the time, as it was for Fred and Barney. Realistic examples of transitive corroboration are immensely complex, with one corroboration stacking on another stacking on another, and with contradictions and errors sneaking in. Needless to say, Barney's corroboration might be more convincing in conjunction with Betty's and Wilma's matching corroborations. Fred could feel still more confident that he would probably discover indistinguishable corroboration if he could imitate their efforts. Transitive corroboration is akin to a mathematical proof with numerous intermediary steps, which anyone can review whenever they wish. Or it's akin to a chain with numerous, compact, easily visible links.
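The chain-of-links picture can be sketched in code. This is a hedged toy model, assuming corroboration can be treated as a simple directed graph of "relies on the account of" links; the names reuse the post's Flintstones example, and the graph itself is invented. A statement is transitively corroborated for someone if a chain of accounts connects them to a direct observation.

```python
# Toy model: graph maps each person to the accounts they rely on.

def reachable(graph, start, goal):
    """Follow 'borrowed corroboration' links from start toward goal."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        if node in seen:
            continue        # skip links already followed (chains can loop)
        seen.add(node)
        frontier.extend(graph.get(node, []))
    return False

# Fred relies on Barney's account; Barney relies on Betty's direct observation.
accounts = {"Fred": ["Barney"], "Barney": ["Betty"], "Betty": ["observation"]}
print(reachable(accounts, "Fred", "observation"))   # → True
```

The crucial feature is that every link in the chain is open to inspection; remove any link (say, Barney refuses to explain himself) and the chain, and with it Fred's borrowed corroboration, breaks.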

It's far from original or revolutionary. Yet it clashes with the traditional directions associated with a few problematic topics: to not seek corroboration at all, not seek corroboration in the usual manner, not expect corroboration to either be obvious or to exhibit any testable pattern whatsoever, not presuppose that everyone will or can experience corroboration similarly, not overanalyze or even presume to understand someone else's corroboration, not urge that corroboration be lucid or universal or coherent, and on and on. In short, such directions blatantly ensure that corroboration is fundamentally non-transitive...and therefore unthreatening.

That leaves only the alternative from earlier: enigmatic authorities. When they decline to offer any explanation, the quality of their corroboration is unknown. Else they may offer an explanation, but its details include "methods" that are explicitly individualized...or rare...or ambiguous...or involuntary. Specifically, they may describe an extraordinary message which suddenly appeared in solely their brain. They may narrate an unsettling dream and proceed to clarify what the bizarre images really meant. They may proclaim that they sensed a statement's authenticity via an extraordinary personal ability granted to them by a god. They may assert their god's true opinion on the basis of their intuitive connection with it. They may revise a moral rule by opinionated, subjective, metaphorical reinterpretations of sacred texts. They may glibly argue that their idiosyncratic preferences are superior due to their ineffable wisdom or spiritual accomplishment. They may frame a particularly welcome surprise as a divine signal written just for them.

Their rationales are perfectly opaque to further investigation or refinement by their listeners. The options are to wholly assume or reject the statements/corroborations. Transitive corroboration isn't like that. Its priceless value is its effectiveness at weeding out uncorroborated pretenders and incompetents. It's why the statements of some authorities are genuinely (verifiably) more accurate than the rest. It's why someone can't selfishly choose the "right" authorities/websites/books to corroborate their prejudices about realities—well, they can if they don't mind that their ideas might be partly or entirely fictional (*cough* politicians). It's a deep change of perspective that's harder to recognize than its outward result of the dismissal of faith-beliefs.

Sunday, January 11, 2015

"Hypothesis or not?" is not the instructive question

Lawyer: So does this theory of evolution necessarily mean that there is no God?
Professor Frink: No, of course not...It just says that God is an impotent nothing from nowhere with less power than the Undersecretary of Agriculture, who has very little power in our system. (chuckling Frink noise)      —"The Monkey Suit", The Simpsons
I've noted before that misconceptions clump together. So the stereotype that dissenters of faith-beliefs have a pitiful lack of imagination is often paired with a second: that they narrow-mindedly interpret every statement like a literal scientific hypothesis. "As someone with a broader viewpoint, I don't pretend that everything can be analyzed through scientific means. I recognize that science has its limitations, and perhaps my faith-beliefs do too. That's why I'm unimpressed when critics scoff that my faith-beliefs are 'inferior hypotheses'. To the contrary, my faith-beliefs are significant because the topics aren't restricted by empirical methods. When I'm worshipping or praying, I'm not a scientist measuring outcomes to test a hypothesis. It sometimes seems to me that you people spend a lot of time, especially on the Web, elevating science into an object of adoration. Just as I have my favorite celebrities and lecturers and books, you have yours. I believe what my favorites proclaim, and so do you. I confess that I approach everything through the lens of my faith-beliefs, but you do the same with science. That devotion explains your determination to misconstrue my ideas as hypotheses and mix up your science with my faith-beliefs."

Surprisingly, I sympathize a little with this stereotype's complaint. I don't wish to phrase my opposition as a war between science and religion's competing hypotheses. I'm not eager to verbalize a stark choice between "sides", assign every statement accordingly, and pressure everyone to align themselves with the correct side. The effort to classify statements into domains is a diversion. I prefer to emphasize the question of each statement's credibility. What is its meaningfulness? How is the accuracy of its meaningfulness demonstrated in practice, especially in comparison with the many inaccurate statements which resemble it? What if someone could take the time to put aside an alleged war between ideologies and only try to judge as impartially as possible whether their dear statements could be mistaken?

To reiterate, these inquiries apply to statements from science as well as religion. The more central quarrel isn't about which team is generally "better" and therefore right. We don't follow statements made by scientists purely because science is great and we love science (whatever that means). We're guided by practical definitions of trustworthiness. The process matters. Statements from a science "domain" are trustworthy to the extent that each is backed by a sufficient, public, repeatable process. The pivotal point isn't the mere acknowledgment that science can be persuasively accurate; it's understanding why that is.

In this context, dedication to science is less about allegiance than about a crucial side effect: full appreciation of scientific standards. Those can inform the predominant manner in which someone sifts through the credibility of statements. They can't subject every statement to thorough science itself—exhaustive and meticulous observation, theorizing, experimentation, publication, peer review, etc. In that sense, they can't handle every statement like a hypothesis. Nevertheless, once they can recognize how science laboriously earns trust in its statements, then they can contrast it with the various alternative ways that humans try to inspire trust...such as manipulation or simply the overbearing, blunt command "Trust me!"

The final goal is a paradigm shift. They can stop selectively asking, "Is this statement 'scientific'? Should I act like a scientist when I ponder it?" They can switch to consistently, honestly, fearlessly asking, "Regardless of the domain this wondrous statement comes from, can anyone reasonably explain why I should believe it, and how I could possibly verify its particular details?"

Saturday, October 11, 2014

an illusionary problem

Time for another common and intentional distortion of materialistic naturalism, despite the simplicity of its two short premises. First, anything supernatural doesn't exist...or if it does, then it's utterly undetectable and extraneous. Second, anything that exists is composed of matter, i.e. physical materials/energies/forces.

Before, I commented on the distortions claiming that materialistic naturalism necessarily reduces human life to total insignificance or predominant negativity. A third is that it carelessly discounts vital inner human experiences as illusions. "Inner experiences" refers to the broad gamut of essentially subjective phenomena: sensations, desires, fears, pleasures, pains, tales, dreams, ideals, identities, social ranks, memories, plans, judgments, inferences, and many more. Despite undeniable immediacy and relevance/salience, these don't appear to be objects composed of matter. These are on the "soul" side of the conventional dichotomy of soul vs. matter. In a single convenient word, these are examples of the category of mentality: mental occurrences which are often at least as vivid as any other reality.

Supposedly, mentality forces the potential followers of materialistic naturalism into an inhuman dilemma. Option one is to be "logically consistent" and concede the unsatisfactory far-fetched notion that mentality's existence is fake. Option two is to concede that mentality exists and be "logically inconsistent" with the premises. Neither option is appealing.

Of course, I reject this dilemma altogether. Consistent with past blog entries, my ongoing viewpoint is that mentality doesn't perfectly fit either of the clumsy classifications. First of all, it can't be purely fake or spiritual because it's too tightly enmeshed with plainly observable bodily matter, such as sensory organs and muscles that serve as its routine external inputs and outputs. And even its own "internal" work coincides exactly with the turbulent activity of still more bodily matter, such as nervous systems—especially brains. In no way is it disconnected or independent from realities composed of matter. To the contrary, unceasing dangers show repeatedly that healthful mentality is frighteningly vulnerable. It's equivalently disrupted or damaged whenever these bundles of matter are. (I once told a story about inflicting analogous destruction on a compact disc to establish that its matter encodes the data.) It can't be a mirage that somehow floats inside or above or around matter.

Nevertheless, I also recognize that a thoroughly materialistic substrate for the category of human mentality doesn't ensure that everything in it corresponds closely to materialistic realities. Specifically, its substrate is an organism. Therefore its form springs out of the same origin as the organism's form: evolution. Naturally, it didn't evolve to act as an excellent all-purpose tool for systematic investigation of microscopic minutiae or faraway galaxies; it evolved to levels of precision and accuracy that are "good enough" for the pertinent details of its organism's environmental niche. Its hastily applied generalizations, shortcuts, biases, and simplifications result from the usual evolutionary pressures for greater efficiency. Overall, its design isn't planned and organized but, well, organic. It showcases the usual tactic of reusing, reshaping, and recombining an existing set of primitive albeit time-tested parts. Its incoming flow of information is confined to "sensors" with ranges limited by cheapness—but admittedly ingenious within the limits.

In the end, it has approximate projections which compromise between low cost and sufficient correctness. It projects temperatures as harsh visceral relative impressions for swiftly guiding an organism through its surroundings, not as exhaustive quantitative data for distinguishing and interpreting the complicated particle physics responsible. It projects acute injuries as sharp unmistakable bursts of unpleasantness, not as lengthy lists of minute diagnostic facts. On behalf of competitive evolutionary fitness, metaphorical hazy shadows on a cave wall are fine...provided the shadows have adequate fidelity to the source realities. (This point rehashes the past entry on the accuracy of an evolved brain.) If any "distorted" or "highly processed" or "incomplete" representation qualifies as an illusion, then illusion is all around; humans encounter and understand realities through the lens of mentality. The term "illusion" becomes worthless. Mentality is packed with illusions according to that overly broad definition.

Frankly, the definition of illusion doesn't need to be that uselessly broad to capture compelling examples. A narrower definition of blatant fiction will do, because human mentality isn't bounded by unoriginal reporting. It can be unreal, i.e. counterfactual. It can fantasize falsehoods. And it can ignorantly digest falsehoods just like realities, either accidentally or through malicious exploitation of its predictable inborn weaknesses. Yet falsehoods come in countless kinds that enrich lives too. Nonrepresentational works are priceless cultural artifacts. Novel inventions start as imaginings.

Thus some examples of mentality might have little resemblance to verified material stuff, while mentality itself nonetheless continues to be the result of material stuff. For instance, a particular house is the occasional setting of my dreams. It's an old wooden three-story house. Its exterior is too small to plausibly contain its numerous trinket-filled rooms and secret passageways. Since this house is in my dreams, it's not like a house that consumes space at a location. It's not a structure with a history or a future. It doesn't interact with anything else at all. I never remember sensing it while I'm awake and alert. It fails the mundane confirmations that sift realities from illusions—it's an illusion of the blatant fiction variety.

Regardless of its fictitiousness, this house illusion is actually embedded in the precious matter of a non-illusory object I like to call myself. That matter is arranged in a way that leads to the occasional, er, reconstruction of the house during sleep. The reconstruction is a process, and the house's lack of confirmed existence is incidental to the process as it's operating. A process that produces abundant distinct examples of realistic mentality can certainly produce examples of lesser realism as well. A mentality maker is a potential illusion maker. Suppose that the active matter is like a keyboard, and mentality is the music it emits. Then waking mentality is the tune played when assorted waking influences press the matter "keys". I drive to my residence, rays of light and sound waves reach my eyes and ears, and I perceive my familiar residence in my mentality. Cycles of dream-sleep, or any altered state, have differing influences pressing differing keys in abnormal patterns. Why couldn't the mentality "tune" be different? Why couldn't the tune be unusual...or discordant?

Ultimately I persist in following materialistic naturalism, notwithstanding my unpredictable dream visits to a counterfeit house. It and its fellow examples of mentality are matter masquerading as non-matter. Humans face everything via the sole point of view they have and cannot escape: their individual instance of mentality. But mentality is an imperfect exceedingly complex effect of the work of evolved but ordinary error-prone matter, so it's susceptible to wildly varying amounts of authenticity. That's why mentality or thoughts work best in a triangle of checks and balances along with actions and objects.

If, like earlier, projection can be a flawed but useful metaphor for mentality, then the house is an image on a metaphorical filmstrip (reminds me of a Sunday installment of Calvin and Hobbes that played with the idea that a weird dream is a sloppily spliced film). The projector, filmstrip, and the image on the filmstrip are all matter. Yet the image might be properly considered an illusion, depending on whether it came straight from a camera placed in front of a house (eyes), a skilled impressionist painter (dreams), or the in-between case of a photograph that has been retouched and/or shoddily copied (aged memories). Materialistic naturalism doesn't blindly invalidate mentality as a whole. But it could cast reasonable doubt on mentality's customary overconfidence and self-importance: "I see things this way in my soul, and my soul outranks and thereby reinterprets every other source of information."

Monday, August 25, 2014

let causality be causality

Groups defined by beliefs love their memorable catchphrases, which function as quick summaries of their groupthink. The inevitable downside—or upside, depending on the speaker's mischievousness—is irritating an outsider who tries to have a straightforward conversation. They're understandably frustrated by repetitive replies composed out of trite proverbs and smug slogans.

One catchphrase among many is "Let God be God." Generally speaking, it's a reminder of the overall attitude and conduct demanded by the speaker's god: submission. In context, it could mean "Stop being afraid or anxious about risks, because our god is omnipotent and caring." (And don't think about the countless times when it plainly permitted the worst.) Or "Stop making moral decisions through your own conscience, because our age-old teachings are superior." (And don't think about the dilemmas or concerns that aren't addressed.) Or "Stop wondering whether our god merits or craves your unending adoration, because it's responsible for making a nurturing planet and filling it with life." (And don't think about Earth's mass extinctions or the vast unlivable bulk of the universe.) Or "Stop obsessing over natural explanations for mystifying phenomena, because our god's ways exceed human comprehension." (And don't think about the historical trend of abandoning supernatural theories time after time.)

If that catchphrase has a counterpart in the stance of materialistic naturalism, then perhaps one candidate is "Let causality be causality." Metaphysical quibbles aside, here causality shall refer to the relationship between physical states of materials at differing times. Causality is the well-justified inference that a material state at later time Y is the way it is due to a related material state at earlier time X. Furthermore, due to the unique details of the state at time X, the state at time Y is not like many other hypothetical alternatives. Those would have required other hypothetical alternatives at time X. Causality is the pattern of tight sequential connection between distinct physical states, from predecessors to successors.

Surprisingly, this minimal proposition has competition. To start with, some may state, "Stuff just happens." Some may say with slightly less fatalism, "Irresistible nonphysical beings keep everything running normally moment by moment." Some may say with more optimism, "The universe is perpetually 'nudged' toward a grand purpose by a trustworthy overseer." Some may opt for the vaguer, "The universe was/is destined somehow to accomplish prearranged outcomes in my life that I call 'Fate'."

On the other hand, they may sprinkle in some science with, "Evolution deliberately molded life into pinnacles of elaborate, intelligent, self-aware creatures." The problem, of course, is that natural selection doesn't work like that. It's emphatically not separate from causality. It doesn't engineer with foresight. It's not a sculptor who gazes at a featureless stone block, envisions the final statue, then chips away the rest. The evolved organisms are the ones which more effectively survived and reproduced. Opinions about the progress of evolution are superimposed on the accretion of adaptations...and exaptations.

By contrast, when someone lets causality be causality, they "permit" current physical realities to be effects of past physical causes. Rather than symbols or clues about something else altogether, realities simply are. The present is what it is because the past was what it was. Realities don't arrive in prepackaged categories such as punishments, rewards, trade-offs, messages, omens, flukes. Although humans compulsively frame their interpretation of events with narratives of widely varying credibility, the events themselves aren't caused by human narratives. How could they be, considering that the human narratives often aren't contrived until long after the unanticipated events?

Therefore, when someone lets causality be causality, they stop futilely dictating that events always conform to the preconceptions of their narratives. While things can be expected to be effects of causes, things cannot be expected to always "make sense" in every human narrative. Mere human objections don't overrule caused realities. Clearly this acknowledgment is both scary and freeing if taken seriously. The scary part is affirming that realities are untamed by narratives. The freeing part is no longer feeling obligated to fixate attention or feelings on the inevitable discrepancy between discovered realities and the narrative that was computed beforehand. (Some readers may notice a resemblance to the Buddhist technique of experiencing the present moment without prejudice.)

But completely disregarding the discrepancy is an unreasonable waste. It can furnish expensively acquired feedback for refining the mistaken narrative. By definition, a narrative is more accurate if it needs fewer feedback changes. Regardless, a permanently unchanging narrative is paradoxically suspect. Perhaps it never changes purely as a matter of policy, one by which it praises its own flawlessness and forbids refinement of itself. The obvious defect is that it could be deceptively self-serving. When it indiscriminately deflects the smallest hint of faults, the narrative could in fact be faulty, for nobody can check!

However, to let causality be causality isn't to totally abandon all narratives. Not all narratives are in conflict with it. To the contrary, a narrative could implicitly embrace and reinforce it. For instance, according to a central narrative of materialistic naturalism, realities have essentially unified substances and behaviors. And the underlying unity accounts for causality. Since things are enough alike, things are able to constantly cause changes in one another. Unbroken unity is linked to unbroken causality—hereafter named unity/causality. The mass of solid Thing 1 isn't essentially dissimilar from the mass of solid Thing 2, so the gravity of Thing 1 partially causes the motion of Thing 2. After a collision, solid Thing 1 could cause solid Thing 2 to crumple, not pass through like a ghost in a fictional narrative (or a neutrino's probable journey?).
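
The unity described above is concretely visible in Newton's law of gravitation, where the masses of Thing 1 and Thing 2 enter the formula as the same kind of quantity, interchangeably. Here is a minimal sketch; the 1000 kg masses and one-meter separation are invented purely for illustration:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
# Because mass is the same kind of quantity in both things, each
# attracts the other with the same magnitude of force.
G = 6.674e-11  # gravitational constant, in N*m^2/kg^2

def gravitational_force(m1, m2, r):
    """Magnitude (newtons) of the mutual attraction between masses
    m1 and m2 (kilograms) separated by distance r (meters)."""
    return G * m1 * m2 / r**2

# Two hypothetical 1000 kg "Things" one meter apart:
f = gravitational_force(1000, 1000, 1.0)
print(f)  # tiny, but mutual and unbroken
```

The symmetry of the formula is the point: swapping `m1` and `m2` changes nothing, because the things are "enough alike" for each to cause changes in the other.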

Forces come from interactions between things. In this aspect, to let causality be causality is to realize that things act as interacting components of whole "physical systems". Causality itself is the relentlessly successful proof of this truism. (Some readers may notice a resemblance to the Buddhist concept "dependent arising".) Contrary to common criticism, unity/causality isn't the absolute repudiation of "something larger than oneself". Instead, the Larger Something is more complicated, turbulent, subtle, diffuse, and impersonal than the typical proposals.

That Larger Something is admittedly abstract and the full description employs baffling mathematical formulas. Yet unity/causality also has palpable ramifications at the normal scale of human thought. At that scale, for a variety of useful purposes, humans customarily draw mental boundaries between things based on noteworthy characteristics. Nevertheless, unity/causality often violates these familiar boundaries. Fittingly, the most personal boundary it violates is the boundary around the human person, i.e. the self who observes and explains. The substance and behaviors which constitute the self are neither isolated nor special; the self is one of the earlier mentioned components of the Larger Something held together by unity/causality. For example, the self cannot create new quantities of energy—it must scavenge replacement energy from outside itself. And objects, such as Thing 1 and Thing 2 from earlier, routinely cause changes to it.

Few would reject that. Assuming they have managed to live long enough, everyone knows firsthand that their selves aren't royally privileged to override unity/causality. If they were, then the portion called the "body" wouldn't be damaged so frequently by involuntary external causes. Everyone should quickly admit that they can't face realities from an untouchable vantage point. Still, applying unity/causality to the self with unblinking consistency goes beyond that admission. To most consistently let causality be causality is to assert that the entire self is thoroughly intertwined with it, top to bottom, inside and outside.

Again, anyone who's encountered diverse personal perspectives and backgrounds could probably agree that everybody's mindsets have been demonstrably shaped, trained, caused. But they may be slower to agree that all events "of" or "inside" the self are regular albeit complex fragile specimens of unity/causality. The self's thoughts change because everything changes. The self's temper fluctuates because everything fluctuates. The self is influenced by past experiences because everything is influenced by past impacts with surrounding things. In short, the self is a highly unusual assemblage, but it isn't an exception to unity/causality. (Some readers may notice a resemblance to the Buddhist concept "no-self".)

Unfortunately, this self-portrait could appear, well, dispiriting. If someone is the effect of causes which they cannot command, then aren't they under constant coercion? Isn't it better for them to ignore this deduction and choose to believe otherwise? No, it isn't. Belief in general shouldn't be "chosen". Honest beliefs should result from candid judgments based on known findings and logical coherency, not based on willful denial. Just insisting that something is inaccurate doesn't transform its testable degree of accuracy. Wishing for the self to not be linked to unity/causality is akin to coping with unpleasant situations by shutting one's eyes.

The sensible approach is almost the opposite: to closely examine the numerous causes which sway the self. With eyes wide open, someone may realistically trace a motive or habit. Then they may grasp both the "message" behind it and that message's amount of irrationality (feeling an irrational motive is alright but mindlessly complying with it could be disastrous!). If a driver wants to avoid repeating a blowout in the future, why shouldn't they confess that the road could be having effects on their tires? Why should they continue to think that their tires are invincible? "I say that my tires are unaffected by this road, so I can drive here every afternoon without worry. All these recurring spontaneous blowouts are an irritating coincidence, though."

Gathering lots of authentic information about causes is prudent. It's an indispensable prelude to savvy active participation in unity/causality. Nobody is converted into a powerless spectator. To let causality be causality is to appreciate that despite each thing absorbing a multitude of effects, it nonetheless emits a multitude of causes too. If Thing 1 causes Thing 2 to fall, then Thing 2 might in turn cause Thing 3 to flatten. Human intelligent awareness enables a far more intriguing case. Humans can (imperfectly) compute wide ranges of options and the effects of those options. Moreover, they can (imperfectly) decode the causes which are manipulating them and everything around them. Finally, they can decide the causes they shall enact in order to yield the effects they want.

Unity/causality doesn't force humans to be victims only. It reflects the consequences of their actions. It can be selectively "bent" to do what they want. Its nonnegotiable condition is that it will operate in accordance with its usual rules, so productively bending it requires a detailed understanding of its intricacies and obedience to them. A product chemical won't be the effect unless the chemist has the skill, and the reactant chemicals, and the measurement/collection/containment instruments, and the chemist carries out the appropriate steps by moving their body—whether they perform the labor or activate it in an automated form or tell a postdoc to do it. A member of a society may recognize and contemplate their society's myriad effects on them, then decide to not propagate one or more onto anyone else. Addicts can identify and avoid old "triggers" that cause the self to relapse; in particular they can decide to revise their routines and/or replace their hobbies.

That said, the potential to collaborate with unity/causality certainly has firm limits. In the end, to let causality be causality is to discern that simulations of the Larger Something aren't always feasible. Ideally, humans can make an accurate analysis by separating, sampling, simplifying, and modeling the relevant data of sections of the Larger Something. Sometimes in practice, the minimum data for an accurate analysis is too extensive. Perhaps a section cannot be analyzed independently. Perhaps a section itself contains a multitude of variant components, and the individual variances are too important for an "average" to substitute for each. In the worst case, the upshot is that a high fidelity simulation would need to include almost identical representations of almost every detail of the source reality.

The source reality's tiny causes could be amplified by working together. Then the cumulative effects abruptly cross resistant thresholds and "cascade" across the system, crossing further resistant thresholds in turn. Thus the simulation's projection could fail spectacularly...if it excluded the one tiny bit that was amplified and pivotal in retrospect. Unity/causality isn't constrained by quaint human preferences for tidy, neatly divisible factors. It's not fine-tuned to facilitate a smooth route to comprehensive knowledge or instant solutions. To let causality be causality is to confront and adjust realities on their terms.
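
The amplification of tiny causes can be sketched with a standard toy model. The logistic map below is my stand-in illustration, not anything from the original text: a simulation whose starting state omits "one tiny bit" of the source reality soon projects a very different trajectory.

```python
# Sensitive dependence on initial conditions, sketched with the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
# The map and starting values are assumptions chosen for illustration.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000, 50)
b = logistic_trajectory(0.300000001, 50)  # differs by one part in 300 million

# Early on the two "simulations" agree closely...
print(abs(a[5] - b[5]))    # still a tiny discrepancy
# ...but the discrepancy roughly doubles each step until it saturates.
print(abs(a[50] - b[50]))  # typically comparable to the whole state space
```

The divergence isn't a bug in the arithmetic; it's the system honestly amplifying a detail the second trajectory got slightly wrong, which is why "a high fidelity simulation would need to include almost identical representations of almost every detail."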

Sunday, August 03, 2014

uncertainty is a participation ribbon

Without a doubt, I knew beforehand that I wouldn't agree with every point in Frank Schaeffer's mishmash, Why I am an Atheist Who Believes in God: How to give love, create beauty and find peace. And my expectations were met. My reactions are as inconsistent as the ideas expressed. Like Schaeffer, I dismissed my faith-beliefs without considering them totally worthless. We're in agreement that the Bible is packed with factual inaccuracies and antiquated moralities. We accept the statements of scientific consensus. We reject the claim that contemporary society should regress to the cultural mores advocated in the Bible. We may have similar political views, but his blogging is obviously much more politically focused.

Thereafter the philosophical, or perhaps psychological, differences start to pile up. I dismissed my faith-beliefs' activities some time after dismissing the corresponding faith-beliefs. Such activities would now clash with my innermost thoughts. I don't have the same old need or desire to continue them. In fact, almost any other category of activity seems more valuable and enjoyable to me. But Schaeffer matter-of-factly confesses that he prays and attends religious services, due to both ingrained compulsion and ongoing appreciation for the experiences' flavor and good intentions. In one section he lightheartedly compares them to bowling regularly.

That's fine with me. He can spend his personal time in whatever frivolous ways he likes, assuming of course he isn't harming anyone else. Likewise, one's chosen identity isn't thrown into actual contradiction by singing Christmas carols, or LARPing, or reenacting Civil War battles, or reciting the dialogue of Puck. The problems only start when someone fails to isolate these fanciful roles within a sharply delimited context...

I might even be glad that he routinely performs religious activities, if the simple effect is encouraging kindness and the contemplation of life through greater perspective. His book more or less portrays "Christ" not as a man or god but as a kind of storied avatar of concepts such as broad inclusiveness, equality, rejection of biblical literalism, compassion, and anything else Schaeffer approves. Hence he suggests that Scandinavian countries merit the label of "Christian", and the Enlightenment qualifies as an implicit "heresy of Christianity".

I suppose that I can see his point. However, the semantic gymnastics strike me as fruitless. Sure, someone certainly could "take back" the myth of Christ from traditional churches, and refashion it in order to link it to new things. But what does that gain? Who cares about ensuring that link? Why not allow an upstart to be good without "christening" it, so to speak? Must this be another case of "meet the new boss, same as the old boss"?

Still, the gap between our differing approaches to religious activities is less extensive than the chasm between our differing emphases on uncertainty—or "mystery" if the speaker wishes to sound wise and impressive. My inclination is to compare uncertainty to a participation ribbon. When I was a young child, participation ribbons were part of Field Day: an annual school event held outdoors. Field Day included quick individual competitions in which the top three received a designated (cheap) ribbon. Nevertheless, everyone in the class who was present received at least one ribbon for their participation in Field Day. Participation itself was an achievement.

To a similar degree, acknowledging the uncertainty of one's current knowledge is the achievement of successfully showing up for the honest struggle to obtain accurate ideas. The recognition of possible uncertainty is akin to the steps before the first step of the Field Day's dash competition (a race so short that it was almost absurd). It indicates the participant's willingness to seriously judge the boundaries of their knowledge.

The opposite isn't confidence but thin-skinned arrogance: "My knowledge is so infallible that absolutely no pragmatic action needs to be taken, whether to 'verify' its implications or to seek out superior alternatives to it." Someone with exactly zero uncertainty is someone who cannot imagine improving their knowledge, so they don't participate meaningfully in the struggle to obtain accurate ideas. They're not lining up at the starting line for the dash. Rather, they sit on the side and brag that they would circle the school building five times if they deigned to test their speed in the dash. This is the state of mind which knows the answer with certainty before expending any mundane effort. It's generally called "fundamentalist" by the irreligious, although it surely isn't confined to self-identified Fundamentalists.

The comparison underlines several aspects. First, like a participation ribbon, uncertainty isn't a pursued prize. It's not an aim. It's utterly normal and unremarkable. It's more like a periodically performed measurement that constantly fluctuates according to specific justifications. Uncertainty is why statistical analysis matters and why verifications should be repeatable; otherwise, one or two checks could be flukes. It's why someone concedes that their knowledge is possibly revisable. It's why a credible experimenter attempts to discern and publicly disclose the weaknesses in their own experimental studies. Once someone pinpoints their sources of uncertainty, they can speculate about circumstances that could reduce uncertainty and enforce revisions to knowledge. Nobody needs to be proud of being uncertain. Nobody needs to speak as if the existence of uncertainty produces definite conclusions in response. While it's an essential prerequisite to placing knowledge in realistic context, uncertainty isn't precious by itself.
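
The point that "one or two checks could be flukes" can be made concrete with a small simulation. The 50/50 "verification" below is a hypothetical stand-in for any check that a worthless idea can pass by chance:

```python
import random

def chance_of_fluke(trials, successes_needed, p_chance=0.5, runs=20_000):
    """Estimate how often pure chance 'passes' a verification: the
    fraction of simulated runs in which blind 50/50 guessing succeeds
    in at least `successes_needed` of `trials` attempts."""
    flukes = 0
    for _ in range(runs):
        hits = sum(random.random() < p_chance for _ in range(trials))
        if hits >= successes_needed:
            flukes += 1
    return flukes / runs

random.seed(0)  # fixed seed so the sketch is reproducible
# With only two checks, chance alone "verifies" the idea quite often:
print(chance_of_fluke(trials=2, successes_needed=2))    # close to 0.25
# With ten repeated checks, a fluke perfect record becomes rare:
print(chance_of_fluke(trials=10, successes_needed=10))  # close to 0.001
```

This is exactly why repeatability matters: each added independent check multiplies down the probability that the "verification" was luck.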

Second, like a participation ribbon, uncertainty isn't an endpoint. It's not a destination. It's not the finish line of the dash. It's not a signal that someone should immediately give up expanding their knowledge. It's a clue to what someone should do next. They can't eliminate it all at once but they can gnaw at it bit by bit. On the other hand, one of the hallmarks of realistic answers is the tendency to lead to all-new sets of questions. The work to reduce uncertainty might result in further uncertainties which are different and surprising. That's still progress. Now the searcher has better reasons to be more sure about the prior idea. Novel uncertainty doesn't cause frustration at not capturing "the final truth". It's an invigorating invitation to keep moving.

Third, like a participation ribbon, uncertainty doesn't demolish the notion of winners and losers. Everyone's participation in the dash doesn't imply that they will complete it simultaneously. As I keep reiterating, uncertainty isn't absolute. It's not a poison. The smallest speck of it doesn't ruin trustworthiness or erase past advances. Its proper use isn't to shut down debate. It doesn't grant equal legitimacy to every half-baked conjecture. It's not a rationale for saying, "I'm uncertain and you're uncertain, so we're both total fools who shouldn't ask each other how we defend our positions."

To the contrary, uncertainty is yet another distinguishing mark. It's directly tied to how the position was verified. If one participant's beliefs seemingly derive from their moods, then they would say that uncertainty springs from the wild oscillations between their moods. That variety of uncertainty is hardly equivalent to the ever-popular variety of mathematically precise and limited uncertainty within quantum mechanics, for instance. Wave-particle duality and Planck's constant don't somehow support the dangerous proposition that all human beliefs have been proven identically useless. Nor do they support the bizarre fantasy that human souls can remake realities by intentionally collapsing wave functions into a desired outcome.

Therefore, uncertainty of a belief isn't tied to the particular way that someone personally encountered the belief. Uncertainty is gauged by the belief's underlying chain or web of positive verifications. For example, I readily declare that I was never personally taught to believe in a Cosmic Turtle. Regardless, I don't dismiss it for the sole reason that I was never personally taught it. I dismiss it because I'm not convinced by a chain or web of positive verifications underlying it. My disbelief isn't wholly dependent on the "narratives" of my upbringing or anyone else's. Am I "uncertain" about the Cosmic Turtle to the extent that I cannot say that its absence of detection thus far forbids its (hidden) existence? Well, yes. And its current status is not too dissimilar from undetectable contemporary "mystery" versions of gods. In short, if folks like Schaeffer claim that I qualify as a "fundamentalist atheist" because my deep uncertainty about Great Theological Off-stage Mysteries leads me to dismiss them, then by their standard they qualify as "fundamentalist Cosmic Turtle deniers". Nobody should care whether someone was personally acclimated to this or that set of ideas. In any case, the more relevant question is how one's ideas are distinctively supported, not how one heard about them. Familiarity or unfamiliarity is not enough to either verify or falsify any specific belief.

Ultimately, disagreements about uncertainty aside, I don't have serious objections to much of the book. I can imagine far worse fates than vast populations acting like "atheists who believe in a god"...a god that does nothing more than embody carefully selected ethical ideals.

Saturday, July 26, 2014

the triviality of human significance

Like a cockroach, propaganda can survive. That category includes some of the undying arguments against materialistic naturalism. As I've said previously here and here, I define materialistic naturalism as a philosophical stance that contains two overlapping statements. First, if anything supernatural exists, then it is as undetectable and irrelevant as if it were nonexistent. Second, anything that does exist originates in material/physical stuff. I write "stance" because I recognize that these sweeping propositions aren't exhaustively proven like facts. Rather, the propositions match known facts and additionally presume that unknown facts will match just as closely.

Today's propaganda cockroach is the argument that materialistic naturalism enforces a total loss of human significance. It sounds like, "Without supernatural entities or metaphysical factors, nothing grants humans greater significance than anything else. Since a single atom cannot be significant, humans composed solely of atoms cannot be significant. Since significance is bestowed by a qualified external judge, humans cannot be bestowed with significance in the absence of supernatural entities. Since something must be relatively central and influential in the universe in order to be significant, humans cannot be significant unless physical laws assign those exceptional qualities to humans. Since something must be relatively long-lived or imperishable in order to be significant, the material of mortal human lives cannot be significant whenever the scale of the universe is considered massively old by comparison. Since significance is often an intangible feeling, tangible things cannot be the sole background of significance." And so on.

Although this overcomplicated line of reasoning might have enticed me once, I have less patience now for the "question" of human significance in materialistic naturalism. In my current perspective, its answer verges on "trivial", similar to the simplistic trivial solutions of a mathematical problem or proof. For instance, the subsets of set Q are the sets whose members are all also members of Q. The problem of finding the subsets of Q that fit particular criteria can be...hard sometimes (some such problems, like subset sum, are NP-hard, with no known efficient general solution). But Q is a trivial subset of Q; after all, every member of Q is "also" a member of Q.
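
Both halves of the analogy fit in a few lines. Subset sum stands in here, as my own assumed example, for the "hard" subset problems the paragraph gestures at, while the trivial answer is immediate:

```python
from itertools import chain, combinations

# The "trivial" side: any set Q is a subset of itself, by definition.
Q = {3, 7, 12, 25}
print(Q.issubset(Q))  # True

# The "hard" side: finding the subsets of Q that fit particular criteria.
# Brute force must consider every subset -- 2**len(Q) candidates -- which
# is the kind of search with no known efficient general algorithm.
def subsets_summing_to(q, target):
    """Return every subset of q whose members sum to `target`."""
    items = list(q)
    all_subsets = chain.from_iterable(
        combinations(items, k) for k in range(len(items) + 1))
    return [set(s) for s in all_subsets if sum(s) == target]

print(subsets_summing_to(Q, 10))  # [{3, 7}]
```

The trivial check costs nothing; the criteria-driven search blows up exponentially with the size of Q. That contrast is the analogy's point.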

Essentially, the trivial pragmatic reason behind the greater significance of the tiny materials within humans is that the materials are within humans. And the trivial reason humans are so significant to us is because we're human. My head is significant to me because it's my head. My brain is significant to me because it's my brain in my head. My brain cells are significant to me because my brain cells are in my brain in my head. Comparable "logic" applies to humans other than me. Their brain cells are significant to me because their brain cells are in their brains in their heads. Certainly I would acknowledge the awful significance of their tiny brain cells functioning worse, perhaps due to a degenerative disease.  

Furthermore, this trivial definition of human significance isn't in conflict with the finite realities of the small space-time position and impact of human lives. It's not hampered by confinement to a single off-center planet in a single off-center solar system in a single off-center galaxy. Nor is it hampered by the limited number of decades between birth and death. The more insightful question isn't whether grander knowledge of the whole universe reduces significance. It's "Why would anyone think it reduces significance?" Why was there such a staggeringly disproportionate estimate of significance to start with? If significance to the whole universe shrinks as the estimate grows more accurate, then nothing is "lost" except the former believability of the outlandish fictional estimate. Humans have only been "demoted" from a noble rank that was never theirs in actuality anyway. How shocking it is to discover that humans aren't the pinnacle or purpose of the whole universe.

Even so, I can appreciate one of the possible motives for seeking to expand significance and anchor it firmly in external realities: objectivity. I admit that the opposite strategy is more subjective in nature. If significance is derived from human judgment, then it's probably bounded correspondingly by provincial human concerns. And given the multitude of humans and human concerns, it will probably be constructed in a multitude of varieties. But the quest for objective significance is fruitless. Perfectly objective significance would be somewhat detached from human standards. Yet if significance were somewhat detached from human standards, then humans likely wouldn't be satisfied with it. Just as one human's measurement of significance might not be satisfying to another, neither might an inhuman measurement of significance. To name the obvious example, someone might reasonably disagree with "objective divine writ" which calls them a negligible member of a permanently insignificant out-group...

In contrast, I don't understand a second possible motive for rejecting materialistic naturalism: the absurd poetic demand that a thing's explanation must have the same human significance as the thing itself. Depending on the context and goal, either the thing or the explanation might have more applicable significance. The activity of reading about the Krebs cycle shouldn't need to feel energetic for the information to be acceptable. Memorization of the chemical bonds of serotonin doesn't need to alleviate depression. The numerical magnitude of the neuron threshold potential doesn't need to trigger memories of Grandmother. Accurate representations of low-level complex realities don't somehow invalidate or replace the significance of human-level experiences. The explanation's significance is independent. Its amount of "mystery" isn't vital to preserving the experience's significance. Detailed studies, consistent with materialistic naturalism, don't diminish it. Likewise, shadowy conjectures, inconsistent with materialistic naturalism, merely decorate it. A decisive event in someone's personal history affected them vividly and had long-lasting ramifications. That's why they weighed its significance so heavily, no matter the alleged comparative contributions of natural processes or ineffable Fate, no matter the individual's vocation of scientist or clergy.

That objection is related to the mistaken assertion that someone who follows materialistic naturalism cannot claim that human significance is an important question. I don't wish to give that impression. I would never state that the question's trivial difficulty is accompanied by trivial importance. Identifying precise manifestations of human significance is a valuable undertaking. I think it's commendable to deliberately evaluate one's own wide-ranging effects, not from an unearthly vantage point, but through reference to one's own powers and chosen ideals. Why is a chosen ideal a worthy benchmark of significance? The answer is trivially obtained: it's worthy by whatever rationales caused it to be chosen, of course. It's meaningful via straightforward ties to the realities of specific human experiences. It's not dependent on stupendous otherworldly significance, grounded in mystical afterlives and deities. Why does it need to be? Why isn't it enough?