Saturday, December 09, 2017

hunting for license

Followers of materialistic naturalism like me have a reputation for scoffing at wishful thinking. We're pictured as having an unhealthy obsession with bare inhuman facts. Despite that, I'm well aware that one of my own ideals is closer to a wish than to a reality a lot of the time. I refer to the recommended path to accurate thoughts. It starts with collecting all available leads. After the hard work of collection comes the tricky task of dispassionately sorting and filtering the leads by trustworthiness. Once sorting and filtering are done, the more trustworthy leads form the criteria for judging among candidate ideas. The sequence is akin to estimating a landmark's position after taking compass bearings from three separate locations, not after a single impromptu guess by eye.

If I'm reluctantly conceding that this advice isn't always put into practice, then why not? What are people doing in its place? We're all creatures who by nature avoid pain and loss, including the pain of alarming ideas and the loss of ideas that we hold. That's why many people substitute the less risky objective of seeking out the leads which would permit them to retain the ideas they cherish for reasons besides accuracy. They're after information and arguments to give them license to stay put. As commentators have remarked again and again, the longest-lasting fans of the topics or debates of religious apologetics (or the dissenting counter-apologetics) are intent on cheering their side—not on deciding to switch anytime soon.

Their word choices stand out. They ask whether they can believe X without losing respect, or whether they must believe Y. Moreover, and significantly, they don't ask which idea connects up to the leads with less effort than the others...or which idea introduces fewer shaky speculations than the others. The crux is that their darling idea isn't absurd and also isn't manifestly contrary to one or more indisputable discoveries. They may still care a bit about its accuracy relative to competing ideas, but by comparison this quality is an afterthought. They're gratified as long as the idea's odds exceed a passable threshold. They can honestly envision that the idea could be valid in some sense. The fitting metaphor isn't searching for a loophole but snatching up every usable shim to fix the loose fit of their idea within the niches that need filling. The undisguised haphazardness of it at least ensures that it's adaptable and versatile.

Of course, at root it's a product of compromise for people who are trying to navigate all the forces which push and pull at them. It soothes them about the lower priority they're consciously or unconsciously assigning to accuracy. By scraping together an adequate collection of leads to make an idea viable, they're informing everyone, including themselves, that their selection of the idea isn't unreasonable. If the pursuit of authenticity were a game, they'd be insisting that their idea isn't out of bounds.

Irritatingly, one strange outcome of their half-cured ignorance might be an overreaction of blind confidence. The brashest of them might be moved to declare that their idea is more than just allowable; it's a first-rate "alternative choice" that's as good as or better than any other. By transforming it into a matter of equal preference, they can be shameless about indulging their preference.

In isolated cases it really could be. But the relevant point is that, successful or not, the strategy they used was backwards. They didn't go looking honestly for leads to the top idea. All they wanted was greater license to keep the idea they were already carrying with them while they looked. In effect, thinking in these terms motivates more than a mere bias toward noticing confirmations of their idea: they're sent on the hunt. Needless to say, they can't expect anybody else to be as enchanted by their hunt's predictable findings.

Friday, December 01, 2017

trading posts

From time to time I'm reminded that it's misguided to ask, "Why is it that faulty or unverified ideas not only survive but thrive?" In my opinion the opposite question is more appropriate: "What has prevented, or could prevent, faulty or unverified ideas from achieving even more dominance?" The latter recognizes the appalling history of errors which have seized entire populations in various eras. The errors sprout up in all the corners of culture. The sight of substandard ideas spreading like weeds isn't exceptional; it's ageless.

The explanations are ageless too. One of these is that the ideas could be sitting atop a heaping pile of spellbinding stories of dubious origin. After a story has drawn its audience toward an unreal idea, it doesn't vanish. Its audience reproduces it. It might mutate in the process. When it's packaged with similar stories, its effect multiplies. Circles of people go on to trade the stories eagerly, because when they do they're trading mutual reassurance that they're right. There's a balance at work. Possessing uncommon knowledge is thrilling, especially when it's said to be both highly valuable and "suppressed". But the impression that at least a few others subscribe to the same arcane knowledge shields each of them from the doubt that they're just fantasizing or committing an individual mistake.

As is typical, the internet intensifies this pattern of human behavior rather than creating it out of nothing. It's merely a newer medium, albeit one with a tremendous gain in convenience and visibility compared to older forms. In the past, the feat of trading relatively obscure stories depended on printed material such as newsletters or pamphlets or rare books. Or it happened gradually through a crooked pipeline of conversations that probably distorted the "facts" more and more during the trip. Or it was on the agenda of meetings quietly organized by an interested group that had to already exist in the local area.

Or a story could be posted up somewhere for its intended audience. This method especially benefited from the internet's speed and wide availability. Obviously, a powerful computer (or a huge data center full of computers) with a memorable internet address can provide a spectacular setting for modern electronic forms of these posted stories—which have ended up being called "posts". Numerous devices worldwide can connect to the published address to rapidly store and retrieve the posts. Whether the specific technology is a bulletin board system, a newsgroup, an online discussion forum, or a social media website, the result is comparable. Whole virtual communities coalesce around exchanging posts.

Undoubtedly, these innovations have had favorable uses. But they've also supplied potent petri dishes for hosting the buildup of the deceptive, apocryphal stories that boost awful ideas. So when people perform "internet research", they may trip over these stories. The endlessly depressing trend is for the story that's more irresistible, and/or more smoothly comprehended, to be duplicated and scattered farther. Unfortunately that story won't necessarily be the most factual one. After all, a story that isn't held back by accuracy and complex nuance is freer to take any enticing shape.

It might be finely tuned in its own way, though. A false story that demands almost no mental effort from its audience might provoke disbelief at its easiness. It would bear too close a resemblance to a superficial guess. That's avoided by a false story that demands a nugget of sustained but not strenuous mental effort—or a touch of inspired cleverness. It more effectively gives its audience a chance of feeling smug that now they're part of the group who're smart and informed enough to know better.

I wish I knew a quick remedy for this disadvantage. I guess the superior ideas need to have superior stories, or a mass refusal to tolerate stories with sloppy substantiation needs to develop. Until then the unwary public will be as vulnerable as they've ever been to the zealous self-important traders of hollow stories and to the fictional "ideas" the stories promote.

Monday, November 27, 2017

no stretching necessary

I'm often miffed at the suggestion that my stance is particularly extremist. According to some critics, materialistic naturalism is an excessive interpretation of reality. It's a stretch. They submit that its overall judgment is reaching far beyond the tally of what's known and what isn't. It's displaying its own type of dogmatism: clinging too rigidly to stark principles. It's declaring a larger pattern that isn't really present.

This disagreement about who's stretching further is a useful clue about fundamental differences. It highlights the split in our assumptions about the proper neutral starting point of belief. When two stances have little overlap, the temptation is to hold up a weak mixture of the two as the obvious default, i.e. the most minimal and impartial position. As in two-party politics, the midpoint between the two poles receives the label of "moderate center". As the poles change, the center becomes something else. I gladly admit that according to their perceived continuum of beliefs about the supernatural domain's level of activity and importance, mine lies closer to one of the clear-cut ends than to something mushier.

From my standpoint, their first mistake is imposing the wrong continuum. Applying it to me is like saying that the length of my fingernails is minuscule next to a meter stick. Although a continuum can be drawn to categorize me as having an extreme stance, the attempt doesn't deserve attention until the continuum itself is defended. It's not enough to decree that people are more or less radical depending on how much their stance differs from yours. For a subjective estimation to be worth hearing, the relative basis for the estimation needs to be laid out fairly. Part of that is acknowledging the large role played by culture. Rarities in one culture might be commonplace in a second. Many times, perhaps most of the time, the customary continuum of "normal" belief isn't universal but a reflection of the setting.

The good news is that the preferable alternative isn't strange or complicated: it's the same kind of continuum of belief that works so wonderfully in myriad contexts besides this one. This kind begins with the stance that people have before they've heard of the belief. This beginning stance is perfect uncertainty about the belief's accuracy. It's at 0, neither positive nor negative.

Yet until the scales are tipped for high-quality reasons, the advisable approach is to continue thinking and acting on the likely guess that the belief isn't to be trusted. This is a superb tactic simply because unreliable beliefs are vastly cheaper to develop than reliable beliefs—the unreliable should be expected to outnumber the reliable. In the very beginning, before more is known, to make an unjustified leap to a strongly supportive stance about the belief...would be a stretch.

That's only one point of the continuum. But the rule for the rest is no more exotic: the intensity of belief corresponds to the intensity of validation. Information raises or lowers the willingness to "bet" on the belief to serve some purpose. The information accumulates, which implies that new information doesn't necessarily replace older information. Each time, it's important to ask whether the belief is significantly better at explaining information than mere statistical coincidence.
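
To make that rule concrete, here's a minimal sketch in Python of one way a continuum like this can be updated. Everything in it is invented for illustration (the belief, the likelihood numbers, the function name); it's a toy, not a prescription.

    # A toy walk along the continuum. The starting point is perfect
    # uncertainty, and each observation nudges the "willingness to bet"
    # up or down. All of the numbers below are made up.

    def update(prior, p_seen_if_true, p_seen_by_coincidence):
        """Bayes' rule: the updated probability that the belief is accurate
        after seeing one piece of information."""
        numerator = p_seen_if_true * prior
        denominator = numerator + p_seen_by_coincidence * (1.0 - prior)
        return numerator / denominator

    belief = 0.5  # neither positive nor negative

    # Each pair: (chance of seeing this if the belief is accurate,
    #             chance of seeing it as mere statistical coincidence).
    observations = [(0.8, 0.4), (0.7, 0.5), (0.3, 0.6)]

    for p_true, p_coincidence in observations:
        belief = update(belief, p_true, p_coincidence)
        print(round(belief, 3))

The third observation fits coincidence better than it fits the belief, so the number drops; each update builds on the accumulated result instead of replacing it.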

A popular term for this kind of continuum is Bayesian. Or, to borrow a favorite turn of phrase, it could be called the kind that's focused on fooling ourselves less. It's a contrast to the myth-making kind of continuum of belief in which stances are chosen based on the familiar belief's inherent appeal or its cultural dominance to each individual believer. At the core, Bayesian continua are built around the ideal of studiously not overdoing acceptance of a belief. This is why it's a futile taunt to characterize a Bayesian continuum stance as a fanatical overreach. The continuum of how alien a stance feels to someone is entirely separate. For that matter, when the stance is more unsettling largely as a result of staying strictly in line with the genuinely verified details, the reaction it provokes might be an encouraging sign that pleasing the crowd isn't its primary goal. If someone is already frequently looking down to check that they're on solid ground, they won't be disturbed by the toothless charge that they've stepped onto someone else's definition of thin ice.

The accusation has another self-defeating problem: the absolutist closed-mindedness that it's attacking isn't applicable to most people of my stance. All that's needed is to actually listen to what we say when we're politely asked. Generally we're more than willing to admit the distinction between impossible and improbable beliefs. We endorse and follow materialistic naturalism, but we simultaneously allow that it could be missing something frustratingly elusive. We could be shown that it needs to be supplemented by a secondary stance.

But by now, after a one-sided history of countless missed opportunities for supernatural stuff to make itself plain, and steadily shrinking gaps for it to be hiding in, the rationale would need to be stunningly dramatic. It would need to be something that hasn't come along yet, such as a god finally speaking distinctly to the masses of planet Earth. Corrupted texts, personal intuitions, and glory-seeking prophets don't suffice. The common caricatures of us are off-target. We aren't unmovable. We'd simply need a lot more convincing to prod us along our Bayesian continua. (The debatable exceptions are hypothetical beings defined by mutually contradictory combinations of characteristics; logic fights against the existence of these beings.)

There is a last amusing aspect of the squabble over the notion that materialistic naturalism stretches too far to obtain its conclusions. Regardless of my emphatic feelings that intermediate stances aren't the closest match to the hard-earned knowledge that's available, I'm sure that I'm not alone in preferring that more people follow them in place of some others. While I disagree that their beliefs are more plausible than mine, deists/pantheists/something-ists upset me less than the groups whose gods are said to be obsessed with interfering in human lives. I wouldn't be crushed if the single consequence of losing the argument were more people drifting to the supposed "middle ground" of inoffensive and mostly empty supernatural concepts.

Because outside of staged two-person philosophical dialogues, it's a short-sighted strategy to argue that my stance presumes too much. It'd only succeed in flipping people from my stance to the arguer's if the arguer added the laughable claims that theirs somehow presumes less than mine, and that it presumes less than deism/pantheism/something-ism...

Saturday, November 11, 2017

supertankers and Segways

Back when my frame of mind was incorrect yet complacent, the crucial factor wasn't impaired/lazy intelligence or a substandard education. It wasn't a lack of exposure to influences from outside the subculture. The stumbling block was an extensive set of comforting excuses and misconceptions for not sifting my own supernatural ideas very well for accuracy...combined with the nervous reluctance to do so. It was an unjustified and lax approach to the specific ideas I was clutching.

I wasn't looking, or not looking intently enough, at the ideas' supporting links. Like the springs around the edge of a trampoline, an idea's links should be part of the judgment of whether its definition is stable and sturdy under pressure. If it's hardly linked to anything substantial, or the links are tenuous, its meaningfulness deserves less credit.

This metaphor suggests a revealing consequence of the condition of the ideas' links. When the links are tight and abundant, the ideas are less likely to change frequently or radically. An idea that can be abruptly reversed, overturned, rearranged, etc. acts more like an idea that's poorly rooted. Perhaps its origin is mostly faddish hearsay. If it can rapidly turn like a Segway for the tiniest reason, it's not showing that it's well-connected to anything that stays unchanged day by day.

However, if it turns in a new direction gradually like a huge supertanker-class ship, it's showing that its many real links were firmly reinforcing its former position/orientation. Changes would require breaking or loosening the former links, or creeping slightly within the limits that the links permit. By conforming to a lengthy list of clues, an explanation places the same demands on all modifications or substitutions of it. This characteristic is what separates it from a tentative explanation. Tentativeness refers to the upfront warning that it isn't nailed down or solidified. The chances are high that it might be adjusted a lot in the near future in response to facts that are currently unknown.

Although revolutionary developments are exciting, such events call for probing reexamination of the past and maybe more than a little initial doubt. There might be a lesson about improving methods for constructing and analyzing the ideas' links. Why were the discarded ideas followed before but not now? How did that happen?

Amazing upheavals of perspective should be somewhat rare if the perspective has a sound basis. Anyone can claim to have secret or novel ideas that "change everything we thought we knew". The whole point is to specifically not grant them an easy exclusion from the interlinking nature of knowledge. Do their ideas' links to other accurate ideas—logic/proofs, data, calculations, observations, experiments, and so on—either outweigh or invalidate the links of the ideas that they're aiming to replace?

If their ideas are a swerve on the level they describe, then redirecting the bulk of corroborated beliefs ought to resemble turning a supertanker and not a Segway. Of course, this is only an expectation for the typical manner in which a massive revision takes place. It's not a wholesale rejection of the possible need for it. From time to time, the deceptive influence of appealing yet utterly wrong ideas can last and spread for a long time. So it's sensible that the eventual remedy would have an equally large scale. Paradigm shifts serve a purpose.

But they must be exceedingly well-defended to be considered. When part of an idea's introduction is "First forget all you think you know", the shrewd reaction isn't taking this unsolicited advice at face value. It's "The stuff I know is underpinned by piles and piles of confirmations and cross-checks. How exactly does your supposed revelation counter all that?" The apparent "freedom" to impulsively modify or even drop ideas implies that they were just dangling by threads or flapping in the wind to start with.

Thursday, October 26, 2017

unbridled clarity

Sometimes I suddenly notice contradictions between items of common knowledge. If the contradiction is just superficial then it might be presenting an opportunity rather than a problem; harmonizing the items can produce deeper insights. Right now I'm specifically thinking of two generalizations that show up regularly in Skeptic discussions about changes in people's beliefs.

First, solely providing data, no matter how high-quality it is, can be futile due to a backfire effect. The recipient's original viewpoint may further solidify as they use it to invalidate or minimize the data. Especially if they're feeling either threatened or determined to "win", they will frame and contort bare facts to declaw them. The lesson is that more information isn't always a cure-all for faulty thinking.

Second, the rise in casual availability of the internet supposedly lowers the likelihood that anyone can manage to avoid encountering the numerous challenges to their beliefs. Through the screens of their computing devices of all sizes, everyone is said to be firmly plugged into larger society's modern judgments about reality. Around-the-clock exposure ensures that these are given every chance to overturn the stale deceits offered by the traditions of subcultures. So the popular suspicion is that internet usage correlates with decreases in inaccurate beliefs.

At first reading, these two generalizations don't mesh. If additional information consistently leads to nothing beyond a backfire effect, then the internet's greater access to information doesn't help; conversely, if greater access to information via the internet is so decisive, then the backfire effect isn't counteracting it. The straightforward solution to the dilemma is to suggest that each influence is active in different people to different extents. Some have a backfire effect that fully outweighs the information they receive from the internet, while some don't. But why?

I'd say that the factor that separates the two groups is a commitment to "unbridled clarity". Of course, clarity is important for more than philosophical nitpicking. It's an essential concern in many varied situations: communication, planning, mathematics, recipes, law, journalism, education, science, to name a few. This is why the methods of pursuing it are equally familiar: unequivocal definitions, fixed reference points, statements that obey and result from logical rules, comparisons and contrasts, repeatable instructions, standard measurements, to name a few. It's relevant any time that the question "What are you really talking about?" must be given a single narrow answer...and the answer cannot serve its purpose if it's slippery or shapeless.

If clarity were vigorously applied and not shunned, i.e. unbridled, then it would resist the backfire effect. Ultimately, its methods increase the clarity of one idea by emphatically binding it to ideas which have clarity to spare. A side effect is that the fate of the clarified idea is inescapably bound to the fates of those other ideas. Information that blatantly clashes with them implies clashing with the clarified idea too. When the light of clarity reveals that an idea is bound to observable consequence X, divergent outcome Y would dictate that the idea has a flaw. It needs to be discarded or thoroughly reworked.

Alternatively, the far less rational reaction would be to stubbornly "un-clarify" the discredited idea to salvage it. All that's involved is breaking its former bonds of conspicuous meaningfulness, which turned out to make it too much of a target. In other words, to cease asserting anything definite is to eliminate the risk of being proven incorrect. This is the route of bridled (restrained) clarity. It's a close companion of the backfire effect. Clarity is demoted and muzzled as part of the self-serving backfire effect of smudging the idea's edges, twisting its inner content to bypass pitfalls, and protecting it with caveats.

In the absence of enough clarity, even a torrent of information from the internet can run into a backfire effect. It's difficult to find successful rebuttals for ideas that either mean little in practice or can be made to mean whatever one spontaneously decides. Ideas with murky relationships to reality tend to be immune to contrary details. It should be unsurprising that beliefs of bolder clarity are frequently scorned as "naive" or "simplistic" by seasoned believers who are well-acquainted with the advantages of meager or wholly invisible expectations.

I'm inclined to place a lot of blame on a core lack of clarity about beliefs, whether intentional or accidental. But I admit there are other good explanations for why the internet's plentiful information could fail to sway believers. The less encouraging possibility is that, despite all the internet has to offer, they're gorging on the detrimental bytes instead. They're absorbing misleading or fabricated "statistics", erroneous reasoning in support of their current leanings, poor attempts at humor that miss and obscure the main point, manipulative rumors to flatter their base assumptions...

Sunday, October 08, 2017

giving authenticity a spin

I'm guessing that no toy top has led to more furious arguments than the one in the final scene of Inception. Earlier, the movie revealed that the top's eventual toppling, or alternatively its perpetual spinning, signals whether the surrounding context is reality or a dream. Before this scene, the characters have spent a lengthy amount of time jumping between separate dream worlds. In the end, has the character in the scene emerged back to reality or hasn't he? The mischievous movie-makers purposely edited it to raise the unanswered question.

My interest isn't in resolving that debate, which grew old several years ago. But I appreciate the parallel with the flawed manner in which some people declare the difference between their "real" viewpoints and others' merely illusory viewpoints. They're the true realists and those others are faux realists. They're living in the world as it is, unlike the people who are trapped in an imaginary world. They're comparable to the dream infiltrators who made a successful return journey—or someone who never went in. They've escaped the fictions that continue to fool captive minds. All of their thoughts are now dependable messengers that convey pure reality. They're the most fortunate: they're plugged directly into the rock-bottom Truth.

I'm going to call this simplistic description naive realism. It seems to me that there are more appropriate ways of considering my state of knowledge. And the starting point is to recognize the relevance of the question represented by Inception's last scene. Many, hopefully a large proportion, of my trusted ideas about things have a status closer to real than fantasy. Yet simultaneously, I need to remember the undeniable fact that human ideas have often not met this standard. It's plausible that my consciousness is a mixture of real and fantasy ideas. Essentially, I'm in the ambiguous final movie scene. The top is spinning, but it also seems to have the slightest wobble developing. The consequence is that I can't assume that I'm inhabiting a perfectly real set of ideas.

Nevertheless, the opposite cynical assumption is a mistake too. A probably incomplete escape from fantasy doesn't justify broadly painting every idea as made-up rubbish. The major human predisposition to concoct and spread nonsense doesn't imply that we can only think total nonsense moment by moment. One person or group's errors don't lead to the conclusion that all people or groups are always equally in error. The depressing premise that everybody knows nothing is far, far too drastic. I, for one, detest it.

I'd say that the more honest approach is to stop asserting that someone's whole frame of mind is more real than another's. My preference is to first acknowledge that ideas are the necessary mediators and scaffolding that make sense of raw existence. But then acknowledge that these ideas can, and need to be, checked somehow for authenticity. It's not that one side has exclusive access to unvarnished reality and the other side is drowning in counterfeit tales. Both sides must use ideas—concepts, big-picture patterns, theories—to comprehend things and experiences. So the more practical response is for them to try to sift the authentic ideas from the inauthentic as they're able. The better question to identify their differences is how they defend the notion that their ideas are actual echoes of reality.

The actions that correspond to authenticity come in a wide variety. What or who is the idea's source? What was the source's source? Is the idea consistent with other ideas? Is the idea firmly stated, or is it constantly changing shape whenever convenient? Are the expected outcomes of its realness affecting people's observations or activities? Does it seem incredible, and if so then is it backed by highly credible support to compensate? Would the idea be discarded if it failed, or is it suspiciously preserved no matter how much it fails? Is it ever even tested?

Once again the movie top is a loose metaphor for these confirming details. A top that never stops spinning isn't meeting the conditions expected of authentic objects like it. Of course, by not showing what the top does, part of the intent of the final movie scene is to ask the secondary question of whether people should care. I hope so. It's tougher and riskier to screen ideas for authenticity, but the long-term rewards are worth it.

Saturday, September 23, 2017

what compartments?

For good reason, compartmentalization is considered to be a typical description for how a lot of people think. This means that they have isolated compartments in their minds for storing the claims they accept. Each claim is strictly limited to its own suitable area of relevance. Outside its area it has no effect on anything, and nothing outside its area has any effect on it. Imprisoning claims in closed areas frees them to be as opposite as night and day. None are forced to simply be thrown out, and none are allowed to inconveniently collide.

The cost is that defining and maintaining the fine distinctions can be exhausting sometimes. But the reward is a complex arrangement that can be thoroughly comprehended, productively discussed, and flexibly applied. By design it satisfies a variety of needs and situations. Not only are contradictions avoided within a given compartment, but there are also prepared excuses for the contradictions between compartments.

It's such a tidy scheme that it's tempting to assume that this form of compartmentalization is more common than it is. Career philosophers, theologians, scholars, and debaters probably excel at it. Yet I highly doubt that everybody else is always putting that much effort into achieving thoughtful organization, self-coherency, and valid albeit convoluted reasoning. It seems to me that many—including some who offer peculiar responses to surveys—don't necessarily bother having consistent "compartments" at all. The territories of their competing claims are more fluid.

Theirs are more like the areas (volumes? regions?) of water in a lake. The areas are hardly separate but are touching, pushing, exchanging contents, shrinking, growing. People in this frame of mind may confess that their ideas about what's accurate or not aren't anchored to anything solid. Their felt opinion is endlessly shifting back and forth like the tide. Their speculations bob around on the unpredictable currents of their obscure intuitions.

Even so, the areas can be told apart. The areas aren't on the same side of the lake or aren't the same distance from the shore. The analogy is that, if prodded, the believer may roughly identify the major contrasting areas they think about. The moment that their accounts start to waver is when the areas' edges and relative sizes are probed. Tough examples help to expose this tendency. Would they examine pivotal claim Q by the rules of A or B? Perhaps they're hesitant to absolutely choose either option because they sense that committing would imply a cascade of judgments toward topics connected to Q.

In effect, the boundaries they use aren't like compartment walls but like water-proof ropes that run through a spaced series of little floats. These ropes are positioned on the surface of the lake to mark important areas. Unlike the tight lanes in an indoor swim race, if they're tied too loosely they move around a bit. Similarly, although believers may be willing to lay out some wobbly dividing lines on top of their turbulent thoughts, their shallow classifications could be neither weighty nor stable. They may refuse to go into the details of how they reach decisions about borderline cases. They can't offer convincing rationales for their differing reactions to specific claims.

This fuzzy-headed depiction raises a natural question. They certainly can't have consciously constructed all this for themselves. So how did it develop? What leads to "informal compartmentalization" without genuine compartments? The likely answer is that ideas from various sources streamed in independently. Close ideas pooled together into lasting areas, which eventually became large enough to be a destination for any new ideas that were stirred in later. The believer was a basin that multiple ways of thinking drained into. As their surroundings inevitably changed, some external flows surged and some dried up. Whether because of changing popularity or something more substantial, in their eyes some of their peers and mentors rose in trustworthiness and some sank. Over time, the fresh and stagnant areas were doomed to crash inside them.

The overall outcome is like several selves. The self that's most strongly activated in response to one idea isn't the self that would be for some other idea. This kind of context-dependent switching is actually a foremost feature of the brain. Its network structure means that it can model a range of varying patterns of nerve firings, then recall the whole pattern that corresponds to a partial signal. It's built to host and exploit chaotic compartmentalization.
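
Since "recall the whole pattern that corresponds to a partial signal" may sound abstract, here's a toy sketch of that general mechanism: a textbook Hopfield-style associative memory, not a claim about actual neurons, with patterns and sizes that are purely invented.

    # Stores two tiny patterns, then recovers a whole stored pattern
    # from a partial cue. Requires numpy.
    import numpy as np

    patterns = np.array([
        [1, 1, 1, -1, -1, -1],   # "pattern A"
        [-1, -1, 1, 1, 1, -1],   # "pattern B"
    ])

    # Hebbian-style weights: units that fire together get linked together.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    # A partial signal: the first half of pattern A, the rest unknown (zeros).
    state = np.array([1, 1, 1, 0, 0, 0], dtype=float)

    for _ in range(5):             # a few update sweeps
        state = np.sign(W @ state)
        state[state == 0] = 1      # break ties arbitrarily

    print(state)                   # settles on the stored pattern closest to the cue

The only point of the sketch is the pattern-completion behavior: a fragment of a prompt is enough to pull up a whole ingrained pattern.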

A recurring metaphor for this strategy is a voice vote taken in a big legislature. The diverse patterns etched into the brain call out like legislators when they're prompted. The vote that emerges the loudest wins. The result is essentially statistical, not a purely logical consequence of the input. The step of coming up with a sound justification could happen afterward...or never. The ingrained brain patterns are represented by the areas in the lake. Overlapping patterns, i.e. a split vote, are represented by the unsteady area boundaries.

The main lesson is that a many-sided viewpoint can be the product of passive confusion or willful vagueness, not mature subtlety. Unclear waters may be merely muddy, not deep. Arguing too strenuously with someone before they've been guided into a firm grasp on their own compartmentalization is a waste. It'd be like speaking to them in a language they don't know. One can't presume that they've ever tried to reconcile the beliefs they've idly picked up, or they've noticed the extent of the beliefs' conflicts. It might be more fruitful to first persuade them to take a complete, unflinching inventory of what they stand for and why. (Religious authorities would encourage this too. They'd prefer followers who know—and obey—what their religion "really" teaches.)

Tuesday, September 05, 2017

placebo providence

It might be counterintuitive, but I've found that in some ways the broad topic of religion abruptly became more interesting after I discarded mine. I stopped needing to be defensive about the superiority of my subset of religion and the worthlessness of others. Seeing all of them as mainly incorrect—albeit not all equally awful or decent—grants a new kind of objectivity and openness. Each item is a phenomenon to examine, rather than an inherent treasure or a threat. Additionally, the shift in viewpoint converted some formerly easy questions into more intriguing puzzles. One of these is "How can people sincerely claim that they've experienced specific benefits from their religion's phantoms?"

The answer "the phantoms exist" doesn't sound convincing to me now. But some alternatives are suggested by a longstanding concept from another context: the placebo. Although placebo pills or treatments don't include anything of medical relevance, recipients may report a resulting improvement in their well-being. Placebos famously illustrate that it's not too unusual for something fake to leave a favorable impression. The analogy almost writes itself. In terms of actual causes and effects, earnest belief in a generous spirit is superficially like earnest belief in a sugar pill.

Without further discussion, however, borrowing a complicated concept is a bit of a cheat. To do so is to gloss over too many underlying details. If something is said to work because it acts like a placebo, then...what is it acting like, exactly? The first possibility is that it's truly acting like nothing. As time goes on, people are continually affected by countless things, and the mixture of various factors churns and churns. So cycles occur depending on which things are dominant day by day. Good days follow bad days follow good days follow bad days. With or without the placebo, a good day might still have been coming soon. The good day was a subtle coincidence. This is why testing only once, on one subject, shouldn't be conclusive. Or the subject could've had an abnormally unpleasant day not long ago, and then they had a very average day. 
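
Here's a minimal simulation sketch of that first possibility, using nothing but invented numbers (the mood scores, the "bad day" cutoff, the run count). It only illustrates the churn: people whose days fluctuate randomly take a do-nothing pill on a clearly bad day, and most of them feel better soon afterward anyway.

    # Toy model: each person's daily mood is random noise around an average
    # of 0. They "take the placebo" only on a noticeably bad day, and the
    # placebo changes nothing about the next day.
    import random

    random.seed(1)

    took_pill = 0
    felt_better = 0
    for _ in range(100_000):
        yesterday = random.gauss(0, 1)   # mood yesterday; 0 is an average day
        if yesterday > -1.0:
            continue                     # only people having a bad day take it
        today = random.gauss(0, 1)       # unaffected by the pill
        took_pill += 1
        if today > yesterday:
            felt_better += 1

    print(felt_better / took_pill)       # well above one half (around 0.9)

Nobody in this toy model received anything real, yet a one-off before-and-after comparison would look like a glowing success, which is exactly why testing only once, on one subject, shouldn't be conclusive.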

A second possibility of placebo activity is that the subjects' awareness of it cued them to spend extra effort seeking, noting, and recalling good signs, as well as brushing aside bad signs. It's like telling someone to look up and see a cloud that's shaped like a horse; they might have said the cloud looked like something else if they'd seen it first. Or it's like asking them whether they're sure that they didn't witness a particular detail in the incident that happened yesterday. Their expectations were raised, so perception and memory were skewed. This tendency works by indirectly coloring their reactions to stimuli. So of course it's applicable to subjective outcomes, i.e. just generally feeling better. As anyone would expect, placebos score consistently higher in medical trials for subjective outcomes such as temporary pain relief than in trials for objective outcomes such as shrinking tumors. 

On the other hand, placebos' subjective gains point to a valuable principle. When root causes don't have swift solutions, enhancing the quality of someone's experience of the symptoms is still both feasible and worthwhile. Regulating attention and maintaining a balanced perspective are excellent mitigation strategies. Deflecting consciousness in a productive direction is an ability that can be developed. If that's too ambitious, then at least indulging in positive distraction will help. Shrewd choices about what to minimize and what to magnify lead to a definite, often underrated boost in mood. And it doesn't require lies and placebos.

The last possibility of a placebo's inner workings is that it affects the subject's brain, and then the alteration in the subject's brain adjusts the behavior of other organs. Unfortunately, the amount of control by this route is frequently misunderstood. For example, mental pictures and verbal affirmations don't "will" the immune system into doubling its effectiveness (though an overactive immune system would be a problem too). Keenly wanting doesn't telekinetically rearrange the relevant particles to match.

Nevertheless, a few types of crude brain-body governance are undeniable. These are called...emotions. The body rapidly responds to calm, agitation, fear, aggression, sadness. It's stressed or not stressed. Regardless of the cause being fictional or not, large or small, sudden or ongoing, vivid or abstract, the effect is that comparable signals flow instinctively from the brain to the rest of the body. On the basis of stopping and/or starting the flow of disruptive signals at the source, a placebo's power to bring about a tangible change isn't a total mystery. It'd be more surprising if a substantial reduction in emotional chaos didn't have desirable consequences for the subject and their life.

These possible explanations for placebos correspond to categories of why people are apparently satisfied by the interventions of their preferred incorporeal entities. The first corresponds to lucky timing. Circumstances were about to brighten with no intervention, so a nonexistent intervention was simply sufficient. The second corresponds to slanted judgment. The thought of the intervention prods the believer to fixate on the upsides and rationalize the downsides. They look harder because they presume that the intervention they believe in had to have done something. The third corresponds to the physical side effects of believing in the intervention. If holding to the belief makes the believer more centered, slower to jump into unhealthy and unwise decisions, and closer to a group of supportive believers, then its rewarding side effects for the believer are substitutes for the missing rewards of the unreliable content of the belief.

One final comment comes to mind. Of all the statements made about placebos, the most curious is the proposal to try to achieve the "placebo effect" in regular clinical practice. To make the prescriptions ethically acceptable, the recipient is fully informed about what they're getting! This is like the question of why I didn't keep participating in my religion after I realized that it wasn't, um, accurate. The retort is that I lost my motivation to bother taking a pill that I knew was only a placebo.

Saturday, August 19, 2017

advanced friend construction

There's no shortage of unflattering, childlike comparisons to irritate the religiously devout. I know this from both my present position and also when I was on the receiving end. For example, they're caught up in incredible fairytales, or they're hopelessly dependent on the support of an ultimate parental figure, or they're too scared of social pressure to admit that the emperor has no clothes on.

But for today I'm interested in another: that they never outgrew having an imaginary friend. They're obligated to immediately deny this jab because, of course, their particular god isn't a human invention. But I wonder, only half-jokingly, if there's a second strong reason for them to feel offended by the comparison. It echoes the irritation of a forty-year-old when intense dedication to building intricate scale models is equated with an uncomplicated pastime for kids.

Simply put, they aren't casual, sloppy, or immature with what they're doing. They're grown adults who are seriously committed to performing excellent and substantial work, thank-you-very-much.

They most emphatically aren't in a brief juvenile phase. They apply great effort and subtlety to the task of maintaining their imaginary friend. (I should note once again that I'm narrowly describing the kind of religiosity that I've been around, not every kind there is.) They often thank it in response to good events. They often plead for help from it in response to bad events. They study sacred writings about it. They recite and analyze its characteristics. They develop an impression of its personality. They sing about its wonderfulness and accomplishments ("what a friend we have..."). They compare notes and testimonials with other people who say they're dear friends with it too. They sort out which ideas of theirs are actually messages sent from it. They apologize to it when they violate its rules. They attempt to decode its grand plan via the available scraps of clues.

The amount of toil might prompt outsiders to question why a non-imaginary being's existence is accompanied by such a demanding project. This reaction is sensible but fails to appreciate the great opportunity it presents for massive circular reasoning. Because a typical, straightforward imaginary friend doesn't present a large and many-sided challenge, the follower's endless striving indicates that theirs must not be in that category. Why would there be elaborate time-honored doctrines, requiring a sizable amount of education and debate, if theirs were just imaginary all along?

Furthermore, they may point to an additional huge difference that's much more perceptible to them than to an outsider looking in: theirs isn't solely a nice friend to fulfill their wishes and never disagree with them. Theirs is an autocrat of their thoughts and behavior ("lord"). It's far from qualifying as a friendly companion in some circumstances. It sees and judges everything. It forces them to carry out acts (good or bad) which they'd rather not. It dictates a downgraded view of them even as it dictates ceaseless adoration of itself.

All the while, to nonplussed observers they appear to be inflicting psychological self-harm. Or as if something unseen is invading them and turning their own emotions into weapons against themselves. An outrageous parallel of this startling arrangement is the pitiful Ventriloquist in the twisted "Read My Lips" episode of Batman: The Animated Series. He carries and operates a dummy named "Scarface", which looks and talks like an old-fashioned gangster boss. They treat each other like separate individuals. They don't seem to know each other's thoughts. Scarface habitually orders his tolerant underlings to speak to him, not to the mostly quiet and unmoving ventriloquist—whom he calls "the dummy". He's always in charge. He gets angry when the ventriloquist tries to contribute to conversation.

The utter absurdity is that the ventriloquist is the sole physical vehicle for the second personality Scarface, yet he's not immune from the paranoiac's hostility. His bully's existence would be totally impossible without his constant assistance. He's in confinement in his last scene, after Batman has prevailed and the dummy is demolished. And he's keeping himself busy carving a replacement...

I realize this parallel is dramatic in the extreme, although I'd note that the gap between it and some people's self-despising religious mentality is unfortunately smaller than it should be. Generally their creation isn't as uncaring or domineering as Scarface. But neither is it as tame and passive as an imaginary friend. For them, it blooms out of the soil of their brain activity into a functioning character who exhibits specific qualities. It gathers up several sentiments in a coherent form. It's built from aspects of the self. It starts as a bare outline pencil sketch, then it's repeatedly redrawn and colored in section by section. It takes on a virtual life of its own. Over time, the person's brain manages to acquire a "feel for" the character. Thereafter, even without voluntary control, it can habitually compute the character's expected opinion. The character is an extra "brain program" that's loaded and ready, and like a memory it springs up whenever something activates it. The more single-minded someone is about the character, the greater the number of triggers it will have.

The curious and thoroughly mockable catchphrase "What Would Jesus Do?" is one example of intentionally invoking an embellished character to answer a moral question. People similarly use these abilities when they infer that someone they know well would've enjoyed a joke or a song. This is simple empathy redirected in an abstract direction through the flexibility of human intelligence. These common abilities to simulate human or human-like characters—"Theory Of Mind" (ToM) is a well-known label—are the natural outcome of humans' evolutionary advantage of deeply complex social interaction. Noisy movie audiences confirm every day that characters certainly don't need to be nonfictional to be intuitively comprehended and then cheered or booed.

Sometimes, when people exit a religious tradition like the one I did, they may comment that they went through an emotional stage which resembled losing a friend. After spending years of advanced construction on the "friend" they lost, their level of broken attachment is genuine. For me personally, that happened to not be as problematic. I wasn't as...close with my imaginary friend as many are. So my puzzlement overtook my former respect early in the journey out. "Who are you, anyway? You don't make any sense anymore. I think I don't know you at all. Maybe I never did."

Saturday, July 15, 2017

upon further contemplation

I'm guessing this won't come as a shock: the sweeping advice that more faith would've prevented my unbelief fails to dazzle me. From my standpoint it seems equivalent to the bullheaded instruction, "You should have never let yourself revise what you think, no matter what well-grounded information came along, or how implausible or problematic your preferred idea was shown to be." I knowingly discarded the beliefs after open-eyed judgment. If more faith is seriously intended as a defense tactic, then it has a strong resemblance to the ostrich's. (The inaccuracy of the head-burying myth can be ignored for the sake of lighthearted analogy...)

I'm more entertained, but no more convinced, by specific recommendations that would've fortified my beliefs. Contemplative prayer's assorted techniques fit this category. These are said to improve the follower's soul through the aid of quietness, ritual, reflection, and focus. The soul is methodically opened up to unearthly influence. It's pushed to develop an engrossing portrayal of the supernatural realm. It's taught to frequently note and gather signs of this portrayal's existence. The edifying periods of intense concentration might be guided by spiritual mottoes, textual studies, mental images, dogmas. Intervals of fasting and solitude might be employed to heighten attentiveness. Presumably, all this effort goes toward two interlocking goals. First is an inspiring appreciation of God. Second is often having in-depth, warm, productive connections with God, at both scheduled and unscheduled times. Zealous contemplators like to declare that they're "in a relationship, not a religion" and that they walk and talk with God.

Nevertheless, I wouldn't rashly accuse them of telling deliberate lies about the phenomena their techniques lead to. Aside from the embellishment and reinterpretation that inevitably slip in, I don't assume that they're fabricating their entire reports. Dreams aren't perceptible outside of the dreamer's brain either, but that doesn't imply that no dreaming occurred. When they say they sense God, I'm willing to accept that their experience of sensing was triggered in them somehow. If an experience roughly corresponds to the activation of a brain region, then purposely activating the region could recall the experience. Anywhere in the world, a whiff of favorite food can conjure a memory of home.

The actual gap is between the meaning that they attribute to their contemplative techniques and the meaning that I attribute. They claim that they're harnessing the custom-made age-old wisdom of their particular tradition to come into contact with their unique God. But when I reexamine their techniques in a greater context, I can't avoid noticing the many close similarities with sophisticated psychological training. I'm referring to training by the broadest nonjudgmental definition. We're social creatures who have highly flexible brains. We're training each other and ourselves, by large and small degrees, constantly though not always consciously, for a host of admirable or despicable reasons. Where they perceive specialized paths to divinity, I perceive the unexceptional shaping of patterns of behavior and thinking.

No matter the topic, a complicated abstraction is usually a challenge for psychological training. Extra care is needed to ensure that it's memorable, understood, relevant, and stimulating. A number of ordinary exercises and factors can help. Undisturbed repetition is foremost. Obviously, over the short term it stops the abstraction from promptly fading or being pushed out by distractions. But for the knowledge to persist, undisturbed repetition shouldn't be crushed into a single huge session. It should be broken up into several, preferably with evenly spaced time in-between. Each should build on top of the previous. Old items should be reviewed before new items. It also helps when the material is itself put in a repetitive and thoughtful form, in which parts of the new items are reminiscent of parts of the old items. Mnemonics, rhymes, and alliteration have benefits other than stylistic flourishes.

Better still is to supplement undisturbed repetition with active processing. Asking and answering questions about the abstraction forces it to come alive and be comprehended. The questions should be decisive and piercing, not vague, superficial, and easy. The aim is greater clarity. A clear abstraction appears surer and realer than a hazy one. Its familiarity increases as it's meditated on and reused. A secondary effect of active processing is to establish its links to other ideas. Its distinguishing characteristics are exposed. Its boundaries are drawn. It ceases to be a mysterious, remote, solitary blob. Instead it's nestled firmly in its known position by neighboring ideas: it's a bit like this and a bit unlike that.

If possible, the active processing should include personalizing the abstraction. A person may or may not be permitted to adapt it to suit themselves. But in either case, they can translate it into their own words and the symbols they find significant. And they can try to pinpoint informative overlaps between it and their larger perspective. Applying it to their vital concerns instantly raises its value in their thoughts. Lastly, to the extent that it influences their individual choices, it accumulates a kind of undeniable role in their personal history from then on.

Personalizing an abstraction works because brains have an innate talent for pouncing on information that affects the self. Stories and sense perception are two more brain talents that can be successfully targeted. The brain already has skills for absorbing concrete narratives and sensations. A compelling story is superior at conveying the qualities of someone or something. Visualizing something abstract aids in delivering it into consciousness, regardless of whether the visualization is merely a temporary metaphor. Paradoxical as it sounds, attaching many little sensory details can sometimes be beneficial for retention. Vividness enables an abstraction to grab and hold a bigger slice of awareness. Excessively minimal or dull descriptions engage less of the brain. Although a concise summary is quicker to communicate than a series of differing examples, the series invokes sustained attention. The multiple examples present multiple chances, using several variations, to make at least one enduring impression.

For mostly the same reason, adding a factor of emotion works too: it's a "language" which is built into the brain. It marks information as important. It boosts alertness toward an abstraction. Meanwhile, the flow of associations pushes an understanding of its parts. The parts to be opposed—problems or enemies—are enclosed by a repelling frame. The parts to be welcomed—solutions or allies—are enclosed by an appealing frame. A thorough bond between emotion and an abstraction can last and last. Its potency could potentially rival or exceed the potency of a bond to a tangible object. Such objects can be hobbled by the fatal shortcomings of realistic weaknesses and complex mixes of advantages and disadvantages, which bind to conflicting emotions.

It so happens that all these considerations feature prominently in the contemplative techniques that would've hypothetically sheltered me from unbelief. That's why I conceded earlier that diligent practice of the techniques probably does fulfill the promise...according to the contemplator. When psychological training is carried out well, I'd expect it to be effective at introducing and reinforcing craftily constructed abstractions. The end results are that numerous stimuli spontaneously give rise to the cultivated ideas. The ideas become the lenses for observing everything else. Dislodging them to make room for contrary thoughts starts to feel, um, unthinkable. Contemplators see themselves producing subtler insight into the being that created them and provided them an Earth to live on. People like me see them producing subtler refinements of the being they're continuously creating and for whom they've provided a brain to "live" in.

However, contemplation is doomed to be a flawed source of proof because it has no essential differences from the "more faith" remedy I first criticized. It often functions independently of tested realities outside the brain. When it's relying on imaginative modes, it operates separately from rigorous argumentation, pro or con. If I'd been more accomplished at it, would my escape have been longer and wobblier? I suppose. Yet I doubt that I could've fended off unbelief forever.

Friday, June 23, 2017

environmental contamination

When people discard their beliefs about the supernatural, they pose a troubling but inescapable question to those they left behind: why? What prompts someone to alter their allegiances so drastically, after ingesting the One Truth for years and years? The ones left behind opt to comfort themselves with a medley of convenient explanations. For instance, similar to their obsessions with "purity" in other domains, they can suggest that the apostate's thinking was contaminated. Wicked lies must have poisoned their beliefs, which were originally perfect and intact. If the crafty sabotage had been resisted, the beliefs would've been preserved indefinitely.

In rigid situations, this suggestion really could succeed. Like an impermeable museum display box enclosing an ancient artifact, total isolation does prevent or slow changes to hardened opinions. This is exactly why stricter groups tightly restrict their members' access to outside information in full or in part. The separation is sometimes enforced through an explicit set of rules, sometimes through the social pressure of the group's treatment of the offender.

The obvious weakness with this practice is that it must be extreme, or else contamination will creep in somehow at some time. If the follower is determined to circumvent the barriers, and they're not under constant surveillance and confinement, it will probably fail sooner or later. But if the follower opts for the opposite reaction of internalizing the barriers, the risk of "contamination" drops to near nil. They resolve to act as their own sentinel, eagerly watching out for and squashing potential threats to the beliefs they follow.

When I was younger, I was more like the willing participant than the rebel. I didn't want to consume media that was openly antagonistic to the core beliefs I had. I knew such media would've upset me, so it didn't feel like an appealing use of my time. And in that period I categorized myself as an unshakable follower; I wasn't especially worried about wading into unending philosophical wars. I hadn't dissected my assumptions enough yet. The most potent issue of all, the problem of evil, wasn't urgent to me yet because, er, things in my surrounding egocentric awareness were largely pleasant.

Surprisingly (...or not...), the contamination of my thoughts happened anyway. I didn't need to search far and wide for alternatives. As it turned out, these were lurking in my environment. Standard biological evolution is one example of a subject that I eventually stumbled on without trying. I didn't bother to read a lot about it or biology or creationism or Earth's age. The religious groups I was in didn't heavily emphasize the importance of rejecting it, although some individuals, such as the parents who home-schooled, did enthusiastically inject it into discussions. I thought of it as "controversial" among believers—like disagreeing about Bible editions—so perhaps I was skeptical that a closer look would give me a firm, worthwhile answer.

My neutrality shifted in high school after someone lent me their copy of the slyly written Darwin's Black Box. It presented "intelligent design" through a breezy mixture of easy-to-read prose, arguments built on commonplace analogies like mousetraps, and technicalities drawn from biochemistry. It provided lengthy chemical names. But like a "For Dummies" book, the major points didn't require deep understanding. In comparison with past creationist works, its official position was "moderate": its single-minded focus was on the alleged infeasibility of gradually evolving microscopic cellular systems, rather than on completely refuting all evolution. Moreover, it underlined its attempt at moderation by conspicuously declining to offer a name for the off-stage Designing Intelligence. No quotations from sacred texts were included. It didn't ask for agreement with dogma. Like typical science books sold to the mass market, it was aimed at anyone with a casual interest. Sections nearer to the end spelled out the ostensible goal, which wasn't to justify a belief in the author's preferred god. It was to achieve the status of legitimate counterbalancing science.

After I returned the book, I mostly didn't think about it. I did note that neither it nor its intelligent design ideology were in major news outlets or publications, except in quotes by public figures such as George W. Bush. Biology certainly hadn't been triumphantly upended. In a few years I had perpetual access to broadband internet at college, so one lazy day I remembered it and performed a spontaneous internet search. I discovered that the reasoning of Darwin's Black Box had been speedily dismantled right when it came out. Its list of biochemical challenges was countered by plausible evolutionary pathways. After observing its low reputation in the eyes of the majority of specialists, my previous naive trust in it sank. Of course, if I hadn't read it, maybe I wouldn't have been motivated to browse evolution-favoring websites in the first place.

This wasn't the last time that the one-sided clash of evolution and intelligent design sprang from my environment into my thoughts. Kitzmiller v. Dover came along. It was a trial about inserting references to intelligent design into the curriculum of public school science classes. The author of Darwin's Black Box was one of the witnesses. His ridiculed answers were disastrous for his cause. Although a courtroom isn't the appropriate setting for scientific judgments, the verdict was unequivocal and impressive. Intelligent design wasn't suitable for science class in public school. My vaguely noncommittal attitude turned strongly in favor of evolution. To repeat, I already knew there were believers like me who admitted evolution's accuracy, so this adjustment didn't push me to reconsider everything.

Anyway, biology and geology weren't my usual subjects when I was at the library or the bookstore. I was intrigued by physics and psychology. Nevertheless, these areas transmitted contaminants too. I was nonchalantly skipping books about atheism, but I was reading books that relayed information in the absence of religious slants or premises. I learned that up-to-date physics was amazingly engaging, extensive, and confirmed. But unlike religion, its various findings didn't support the story that anything related to humans, or specifically human-like, was central to the "purpose" or the functioning of the cosmos. In big scales and small, human concerns and abilities weren't essential. Despite their supreme cleverness, they were one species on one planet. Fundamentally they were derivative. They were built out of atomic ingredients and dependent on numerous strategies to redirect entropy for a while.

I absorbed the implicit style of mostly leaving ghostly stuff out of normal physics phenomena. I just assumed that the divine and demonic realms existed apart and parallel in some undetectable, undefined way. The realms' understated interventions on the mundane plane were generally compatible with physics—recovering from an infection or obtaining a new job—except on rare occasions such as starting the universe or resurrecting. In short, the arrangement I settled on was a popular one: more physics-plus than anti-physics. My thoughts were contaminated by an acknowledgment of physics' effectiveness at comprehending things. The symptom of this contamination was that by default I inferred particles and forces at work everywhere, not spirits.

As for psychology, religion's role was more prominent. The trouble was that its role was so frequently described as harmful. It could be tied in with a patient's delusions of paranoia or megalomania, or lead to anxious guilt, or shortsightedly obscure the real root causes of distress. Some thinkers labeled it a sophisticated manifestation of...an infantile coping mechanism. I took some offense at that, though I did take the hint to ensure my beliefs about the supernatural weren't reducible to direct gratification of my emotional needs.

One memory that's grown funnier to me is my head-spinning encounter with the book that grabbed me with its overwrought title, The Origin of Consciousness in the Breakdown of the Bicameral Mind. It explained the Bible as reports of ancient auditory hallucinations. I wasn't nearly ready to take its argument seriously, so its raw effect on me was more emotional in nature than intellectual. I was engulfed in the initial shock that this kind of bold speculation dared to exist. It was inviting me to demote the Bible to a mythological text and the voice of God to a trick of the brain. I hadn't faced these irreverent options so bluntly before. It was like an "out-of-belief experience", temporarily floating out and above the ideas and looking down at them like any collection of cultural artifacts. My faith stayed where it was, but I didn't forget the dizzying sensation of questioning all of it.

I don't want to give the wrong impression about packing my leisure time with education. I read lots of fiction too. Yet it wasn't an environment free from contamination either. When I looked up more of the works by two authors of fiction I'd enjoyed, Isaac Asimov and Douglas Adams, I collided with atheists again. I read Asimov's "The Reagan Doctrine" online. It was short, but it was remarkably self-possessed and methodical in the facts it applied to break apart the false equivalence of religiosity and moral trustworthiness.

After Adams' death, I bought The Salmon of Doubt without skimming through it. I was anticipating the unfinished Dirk Gently portion. I hadn't known what else was in it. It contained a number of Adams' non-fiction essays and interviews, and several of these featured his strikingly unapologetic atheism. For example, he created the metaphor of an ignorant puddle declaring that its perfectly fitting hole must have been created for itself. I hadn't purchased a book centered around atheism, but nonetheless I had purchased a book with passages that were cheerily atheistic. In the environment of fiction I'd been contaminated by the realization that there were people who were convinced that I'd been gravely misled all my life...and who had written stuff I liked...and who seemed good-humored and reasonable. They weren't any more detestable or foolish than us followers.

This realization didn't spin me around 180 degrees. Nothing ever did. My reversal was the result of a fitful sequence of little (20-degree?) turns. However, there were a few sources of contamination that immediately preceded my breakthrough: artificial intelligence, information theory, and cognitive science. Like the rest of the contaminants, these didn't develop overnight but started out as innocent-looking seeds. The earliest crucial one was Jeremy Campbell's The Improbable Machine. I was a teen when I picked up this book on impulse (the store had steeply cut its price). It was a wide-ranging exploration of the theme of connectionism: artificial intelligence by mimicry of the brain's evolved layout of legions of tiny interconnected units acting cooperatively. According to it, the opposite extreme was the route of translating all the brain's abilities into orderly chains of logic.

Before, I'd been accustomed to assigning artificial intelligence to the narrow project of constructing bigger and bigger logic machines—the fragile sort that Captain Kirk could drive insane with a handful of contradictory statements. Campbell's thesis was that connectionism was a promising model for the brain's more soulful traditional powers: intuitive leaps, creativity, perception, interpretation of ambiguous language, etc. I was accidentally contaminated by the faint suspicion that souls, i.e. nonphysical spirits/minds, weren't strictly necessary for doing whatever the brain does. I began to imagine that the puzzle was only difficult and complex, not doomed to failure by mind-matter dualism. Tracing the brain's powers to its very structure had the long-term side effect of casting doubt on crediting them to something inhabiting the structure.
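To make the contrast concrete, here's a minimal sketch of my own (assuming nothing from Campbell's book beyond the textbook picture of a single connectionist unit; the names and numbers are invented for illustration): one style hard-codes a chain of rules, while the other lets a verdict emerge from many small weighted contributions.

    # Minimal sketch: a chain of explicit rules vs. one tiny connectionist unit.

    def rule_based_is_dog(has_fur: bool, barks: bool, has_four_legs: bool) -> bool:
        # "orderly chain of logic": brittle if any single premise is off
        return has_fur and barks and has_four_legs

    def unit_is_dog(evidence: list[float], weights: list[float], threshold: float = 0.5) -> bool:
        # one interconnected unit: many weak signals cooperate; a missing or
        # noisy input merely lowers the total instead of breaking the chain
        total = sum(e * w for e, w in zip(evidence, weights))
        return total > threshold

    # evidence scores for: fur, barking, four legs, wagging tail, size
    print(rule_based_is_dog(True, False, True))                       # False: one premise missing
    print(unit_is_dog([1.0, 0.0, 1.0, 0.9, 0.7], [0.3, 0.3, 0.2, 0.2, 0.1]))  # True: 0.75 > 0.5

The rule chain snaps when one premise fails; the unit only loses a little confidence, which is roughly the resilience the book was selling.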

A decade later, I returned to these topics. My career in software was underway. I regularly visited blogs by other software developers. The recommendation to try Gödel, Escher, Bach showed up more than once. I ignored it for a long time because of my preconceptions. When I finally gave it a chance, Hofstadter's compelling effort stirred my curiosity. I moved on to devouring more of his, and I also checked for more of Campbell's. This time I sped through Grammatical Man, which ignited a prolonged fascination with information theory. I consumed more on this subject. And I paired it with cognitive science, because I wanted to know more about the brain's dazzling usage of information. Amazon's automatic recommendations were helpful. Some books probing consciousness concentrated more on anatomy and some more on philosophical dilemmas. My first Daniel Dennett buy wasn't Breaking The Spell, it was Consciousness Explained.

The accelerating consequences were unstoppable. My desire had been to read about how people think, but the details were often contaminated with well-constructed criticisms of the whole principle of the soul. I'd once been confident that the mystery of inner experiences was invincible. It was a gap that my faith could keep occupying even when all else could be accommodated by nonreligious accounts. Instead, this gap was filled in by the increasingly likely possibility that inner experiences were essentially brain actions.

For me, the scales were tipped. The debate was decided. All the signs of an immaterial layer of reality had been exposed as either absent, illogical, fraudulent or illusory, or at best ambiguously unconvincing. I recognized that I could continue believing...but if I did it would be with the shortcoming that the unsatisfying "truth" of the beliefs made no distinguishable difference to reality. If I'd then been familiar with Carl Sagan, I could've compared the beliefs' contents to his famous illustration of an invisible, silent, incorporeal dragon in the garage.

I made a slow, deliberate choice between two sides. Contrary to the contamination excuse, interfering outsiders weren't responsible for misleading me. I wasn't playing with fire, either intentionally or not. I didn't hastily abandon my beliefs as soon as I became aware of another stance. I wasn't an impressionable simpleton who thoughtlessly trusted the internet. Enlarging my knowledge didn't forcibly corrupt or "turn" me. The hazard was never in merely seeing the other side. It was in paying close attention to what each side used to validate itself. The pivotal contamination was learning that the other side pointed to data, mathematics, self-consistency, appreciation of common flaws in human cognition, and prudent restraint before relying on unverified beliefs. But as for the side I'd been taught...

Monday, May 29, 2017

follies of ethnocentrism

More and more as of late, I've noticed that commentary about right-wing American evangelicals has been asserting that their racist leanings are beyond question. I realize that there is excellent past and present evidence that supports this generalization. I agree that, for a substantial portion of them, it's applicable.

However, in the portion who I know through personal experience—which I confess only represents a subgroup of a subgroup—the prejudice I detect is a smidge more complex...or at least flows along more complex channels. For lack of a better label, I'll use "the respectables" to refer to this set. (Yes, I'm glad to report that I'm familiar with some whose warmhearted behaviors and outlooks are unobjectionable. I'm not referring to them.) The respectables are shocked by accusations of racism. After all, they never suggest that race is at the innermost core truth of someone. It's not a biological limit on abilities or personality. It isn't destiny.

Part of the reason is that the respectables treasure the sentiment that all people are targets for the duty of universal evangelism. Attendees of any race are admitted to public events. In large gatherings, some of the regular goers are likely to be in the so-called "diverse" category. Officially, all people-groups domestic and foreign need to be afforded every chance to convert. Adventurous stories of evangelism in faraway nations are more praiseworthy, not less, when the setting is "exotic", and that goes for the exoticism of the setting's races. Although the pompous and intrusive quest to Christianize all humanity is nauseating, it certainly undermines the very idea of using race alone to disqualify people.

So the respectables aren't hostile toward races in theory. They don't believe (or believe but refrain from saying?) that a race is genetically inferior or superior. Adopting a child of another race is accepted, as is marrying a spouse of another race. Their anxieties have oozed in a less obvious direction. In the most general terms, they're dismissive and fearful of dissimilar cultures. They rank each one according to their estimate of its deviation from their own. Whatever else they're disciples of, they're disciples of ethnocentrism.

Its effects are less vicious than raw racism but unfortunately are tougher to discern and exterminate. It might not motivate angry attacks, but it'll motivate avoidance, or just chilly distance. The barrier it creates isn't necessarily impossible to cross, but in any case it's significant. Individuals face the extra burden of first overturning the immediate verdicts that were assigned to them. They aren't utterly excluded, but they have...unsettling questions surrounding them. In the race for social status, they begin from behind the starting line.

Like any human behavior, the kind of apprehensive ethnocentrism I'm trying to describe doesn't stay neatly contained in the boundaries of its dictionary definition. It's a frequent teammate of the racism of thinking that a race is synonymous with a culture. With this link, the outcome of stigmatizing the culture becomes, well, synonymous with stigmatizing the race. The difference is negated.

Nevertheless, race at least isn't the only cultural sign. Ethnocentrism's domain is broader than race, because culture itself has many details occurring in endless varieties. The list of additional cultural signs to exploit includes clothing, hair/skin styling, etiquette, economic class, language, slang, accent, geographic region, religious adornment, occupation, and food/music preferences. To the ethnocentric person, race as well as any of these may end up functioning as clues of reassurance or dread about the culture that controls the stranger. Because they have rationales about why theirs is the best, they choose which signs matter the most to them and which cultures are better or worse approximations of theirs. A stranger who displays enough signs can be successfully pigeonholed as a member of an admired culture, despite showing some signs of a scorned culture too.

Yet, ethnocentrism's potential to take several observations into account at once cannot manage to compensate for its unfair perceptions. Usually it's already committing a pivotal error: it's really sorting among slanted and incomplete abstractions (impressions, clichés) of cultures. This is to be expected. A vast culture, with a long history and a wide footprint, has numerous types and subsections and components, and upsides and downsides of each. It can't be squeezed into ethnocentrism's coarse rankings of worthiness unless it's drastically pared, flattened, summarized, frozen in time, severed from outside influences. A largely uninformed collection of selected fragments is hammered into a convenient lens. And the distorted lens is used to impatiently classify anyone who has (or seems to have) some of the signs. The problem with this result is predictable. Regardless of the culture's shared elements, it probably accommodates a host of almost opposite opinions on a host of topics. There's no visible hint to distinguish where the stranger falls in this range.

Furthermore, patchy awareness of the culture could be magnified by patchy recognition of the various levels of influence that cultures have. In order to believe that the culture the person supposedly signifies can sufficiently explain them, their capacity to make their own decisions needs to be disregarded. Again there's a spectrum. Some are devotees who fully embrace it. Some opt to be nominal members, largely detaching their identities from it. And some are selective in what they absorb or ignore, and these selections can change over time. Depending on their environment, they could simultaneously be selecting from other cultures, even if they're overlooking the logical incompatibility of the mixtures. Or to some degree they could be declining a preexisting path, instead inventing special ideas and novel ways of life. The point is that perhaps the majority of their choices are dictated by a culture, but that can only be speculation until their personal stances are known.

The pitfalls of pursuing ethnocentrism don't end there. Its approach is characterized by warily eyeing culture mostly from the outside, i.e. not by talking to the participants. It should be no surprise that it's prone to misinterpreting the actual practice and meaning of the contents. The importance of context shouldn't be underestimated. Statements might not be serious or literal. Symbols might have obscure, even ironic, meanings. Problematic items might be rationalized or downplayed. To add to the confusion, the pronouncements published by "authoritative" organizations often diverge from what many of the participants genuinely think. The area of interest should be how the culture is lived, not naive, superficial analyses of its minutiae. If everyone within it views a puzzling story as a mere exaggeration for dramatic effect, then highlighting the story's outlandishness accomplishes nil. An external critic's disagreement about what is and isn't a figure of speech isn't pertinent to them.

In combination, these considerations justify being initially unmoved by the declaration, "I'm not a deplorable racist—I'm a respectable citizen who's concerned about the cultural mismatches between us and them". Clearly, placing focus on "culture" nonetheless provides an abundance of chances to maintain one-sided ideas about massive numbers of human beings, hastily erase all fine distinctions, and typecast the undeserving. The possible consequence is another pile of simplistic attitudes which are barely an improvement.

Cerebral followers, who've learned their lines well, can swiftly repeat the customary retort to remarks such as these: the horrifying spectre of moral relativism. Namely, they assert that people with positions like mine are obligated to say that the morality of every piece of every culture cannot be judged consistently. But I'm not that extreme (or that hypocritical). I cheer well-done critique, if its aim is narrow and precise. And, as much as possible, I prefer that its aim isn't excessively pointed toward, or conspicuously avoiding pointing toward, any singular group of cultures. Thoughtfully learning then delicately responding to an issue isn't the same as sweeping demonization of an entire way of life or of the people who identify with it. When disputing a specific item, I want to hear an explanation of it violating a nonnegotiable ethical principle, not its level of contradiction with sacred tenets or with some alternative's "common sense". Cultures, like other human creations, have sinister corners and blind spots that thoroughly earn our distaste. But we can extend the courtesy of not presuming that sinister appearances are always correct and of reflecting on whether a startling difference is trivial or shallow rather than perverse.

Sometimes this is easy to decide and sometimes not. If it were never complicated for the people deciding, I'd wonder if they're paying enough attention to the whole picture...or if, like an ethnocentrist, they make up their minds beforehand anyway.

Sunday, May 14, 2017

a question of substance

When one group is in the habit of ridiculing an opposing group's beliefs, the easily attacked topics become customary. Mentioning them is so commonplace that no additional explanation is necessary anymore. They typically act as familiar, comforting reference points to casually toss in with other remarks. "Well, we shouldn't be shocked by this, after all. Don't ever forget that the mixed-up people we're talking about also believe _____."

For example, the people on the other "side"—I mean those who still follow the set of beliefs that I scrapped—often parrot the superficial story that a lack of sound religious commitment forces the lack of sound ethical commitments. Their false presumption is that ethics are always shaky and individualized apart from systematized religion's supposed timelessness and objectivity. They imagine that people without religion can't have steady principles to work with. Unbelievers' rootless ethics are to blame for every "incorrect" view and behavior. Their morality is said to be hopelessly deficient because they're inventing right and wrong however they wish.

At one time, I would've glibly agreed that this prejudicial story is self-evident. Needless to say, now I object to virtually every aspect of it, from start to finish. I've become part of the group it stigmatizes and seen for myself that it's wrong about us. Fortunately, we can console ourselves with the numerous targets that religious beliefs richly offer us in return. In my setting, usually these take the form of peculiar Christian myths and doctrines. Transubstantiation certainly fits that description. It's the doctrine that a ritual can replace the substance of food and drink with the "sacred presence" of Christ. Its plain definition is enough on its own to stand out as bizarre and questionable. Quoting the belief of literally eating the real substance of a god suffices as an open invitation for biting commentary.

Simply put, it presents endless possibilities for wisecracks. Let me emphasize that that's mostly fine with me. I'm not broadly opposed to jokes about an idea...especially the rare jokes which manage to be funny and original. Calling attention to an idea's absurdity shouldn't be confused with "insulting" the idea's followers. Too many nations have created oppressive laws through that confusion. Though, at the same time, the more that a joke strays off topic and verges on outright jeering at people, the less I like it. And back in my former days, the more likely I would've been to briskly skip over the joke's message altogether.

My quibble is something else: the humorous treatment of transubstantiation risks an underappreciation of its twisted philosophical side. When a critic shallowly but understandably chuckles that after transubstantiation the items are visibly no different than before, they're not responding to the doctrine's convoluted trickery. According to its premises, the change wouldn't be expected to be detectable anyway. Everything detectable is dismissed as an attribute (some writings use the word "accident" as a synonym of attribute). But the ritual solely replaces the substance.

The distinction between attribute and substance is strange to us because it's a fossil from old debates. These debates' original purpose was to analyze the relationships among the multiple parts of conscious experience. If someone senses one of their house's interior walls, they may see the color of its paint, feel its texture, measure its height, and so on and so on. Ask them twenty questions about it, and the responses build a long list of attributes. After the wall is repainted or a nail is driven into it to hang a picture, then a few of the wall's attributes have changed. But the wall is still a wall. The substance of what it is hasn't changed. After a demolition crew has blasted away at a brick wall, and left behind a chunk, the chunk is still a wall; it's a wall with smaller attributes. In this scheme, a thing's substance is more like an indirect quality, while direct observations of the thing yield its attributes. The attributes are the mask worn by the thing's substance, and the mask has no holes.

The transubstantiation doctrine reapplies such hairsplitting to justify itself. It proposes that the items' attribute side is kept as-is and Christ's presence is on the items' substance side. By a regularly scheduled miracle, the presence looks like the items, tastes like the items, etc. It's subtler than transformation. How exactly is the process said to occur? The answer is mystery, faith, ineffability, magic, or whatever alternative term or combination of terms is preferred. The doctrine asserts something extraordinary, but then again so do official doctrines about virgin births, 3 gods in 1, eternal souls. Merely saying that it violates common sense isn't enough; common sense can be faulty. And merely highlighting its silly logical implications doesn't address its base flaws.

I think it's a fruitful exercise to articulate why the core of it, the split between attribute and substance, isn't plausible. The first reason is probably uncontroversial to everybody whose beliefs are consistent with materialistic naturalism: human knowledge has progressed in the meantime. We can catalog an extensive set of a thing's innate "attributes" through reliable methods. The discoveries have led to the standpoint of understanding a thing through its attributes of chemical composition, i.e. the mixture of molecules of matter within it and the molecules' states. This standpoint is deservedly applauded for its wide effectiveness because, as far as anyone has convincingly shown, human-scale attributes derive from these composition attributes. (Emergence is a strikingly complex form of derivation. Its turbulent results are collectively derived from the intricate combinations of many varied interactions in a large population.)

Asking another twenty questions to gather more attributes isn't necessary. Ultimately, the composition attributes are exhaustive. Removing or modifying these wouldn't leave untouched a remaining hypothetical "substance" of some kind. These have eliminated the gap in explanation that substance was filling in. These aren't on the surface like the attributes obtained by crudely examining a thing's appearance. The suggestion that all factual examination only goes as deep as a thing's outside shell of attributes stops sounding reasonable when modern examination is fully sophisticated and penetrating.

The second reason why the split between attribute and substance isn't plausible is more debatable, although I sure restate it a lot here: the meanings of thoughts should be yoked with actions and realities (outcomes of actions). The connections might be thinly stretched and/or roundabout. At the moment the actions might only be projected by the corresponding thought(s), but if so then there are unambiguous predictions of the real outcomes once (or if) the projected actions take place. The actions might be transformations of thoughts by the brain: recognition, generalization, deduction, translation, calculation, estimation. Under this rule, thoughts of either a thing's attributes or of its substance could mean less than initially assumed.

Attributes are marked by tight association with particular actions considered in isolation. Wall color is associated with the action of eyeing the wall while it's well-lit or capturing an image with a device that has a sufficient gamut. A wall dimension is associated with the action of laying a tape measure along it from end to end, for instance. Substance is marked by the inexact action of classifying things, perhaps for the goal of communicating successfully about them using apt words. It's an abstraction of a cluster of related characteristics. For a wall, a few of these characteristics are shape, i.e. length and height longer than depth, and function, i.e. enclosing an interior space from an exterior space. When the thing matches "enough" of the cluster, the average person would lean toward classifying it as a wall.
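A rough way to restate that distinction is a sketch in code (entirely my own invention; the characteristics and thresholds are made up for illustration): each attribute is the result of a specific measuring action, while "substance" behaves like a classification that fires once enough of a cluster of characteristics match.

    # Sketch only: attributes as outcomes of specific actions, "substance" as
    # an observer's classification over a cluster of characteristics.

    def observe_color(thing: dict) -> str:
        # attribute: tied to one concrete action (looking while well-lit)
        return thing["color"]

    def measure_height_m(thing: dict) -> float:
        # attribute: tied to another concrete action (laying a tape measure)
        return thing["height_m"]

    WALL_CLUSTER = {
        "flat": lambda t: t["depth_m"] < min(t["height_m"], t["length_m"]) / 10,
        "upright": lambda t: t["height_m"] > 1.0,
        "encloses space": lambda t: t["separates_rooms"],
    }

    def is_a_wall(thing: dict, enough: int = 2) -> bool:
        # "substance" here is just the verdict that the thing matches enough
        # of the cluster, not an extra ingredient hiding underneath
        matches = sum(1 for test in WALL_CLUSTER.values() if test(thing))
        return matches >= enough

    painted_wall = {"height_m": 2.4, "length_m": 5.0, "depth_m": 0.15,
                    "color": "blue", "separates_rooms": True}
    print(observe_color(painted_wall))   # an attribute changes with a repaint
    print(is_a_wall(painted_wall))       # the classification stays put

Repaint the wall and observe_color returns something new, but the classification (the nearest thing to "substance" in the sketch) stays the same, because it was never a separate ingredient underneath the attributes.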

The observer is the one who decides whether to treat a characteristic as a flexible attribute or a member in the abstract cluster of a given thing's "essential substance". This is in line with the composition standpoint, which conveys that particles are indifferent to the more convenient labels people use for larger formations. There isn't anything embedded in each particle that evokes one of these categories over the other. The composition standpoint asserts that particles of the same kind are freely interchangeable if the particles' relevant physical properties are alike.

Indeed, particle interchangeability happens to be doubly significant because, as we all know, things deteriorate. A thing's owners may choose to repair or replace its degraded pieces. When they do, they've removed, added, and altered a multitude of its particles. Yet they may willingly declare that the thing is "the same" thing that it was before, just marginally better. In other words, the substance of it hasn't been lost. Like the famous Ship of Theseus, cycles of restoration could eventually eliminate most of the thing's original particles—which may be presently buried in landfills or turned to ashes by fire. Meanwhile biological contexts withstand continual flows of particles too, as cells die, multiply, adapt. If the action of declaring a thing's substance "unchanged" continues on despite its shifting composition, then the meaning of its substance apparently isn't even bound to the thing's matter itself. Part of the substance has to be a subjective construction.

Nevertheless, the consequence isn't that the entire thought of substance must be discarded as utterly false or fake. The coexistence of objective and subjective parts underpins a host of useful thoughts—such as self-identity. Rather, the need is to remember all the parts of the meaning of substance, to avoid the mistake of interpreting it as if it's an independent quantity or quality. Such a mistake could possibly feed the curious conjecture that a wielder of uncanny powers can seamlessly substitute these independently-existing substances upon request...

Sunday, April 16, 2017

fudge-topped double fudge dipped in fudge

Two truisms to start with. First, finite creatures like us don't have the ability to swiftly uncover all the details of each large/complex thing or situation. However, we still need to work with such things in order to do all sorts of necessary tasks. Second, we may combat our lack of exhaustive knowledge about single cases by extracting and documenting patterns from many cases and carefully fashioning the patterns into reusable models.

I'm using "model" in a philosophically broad sense: it's an abstracted symbolic representation which is suitable for specifying or explaining or predicting. It's a sturdy set of concepts to aid the work of manipulating the information of an object. This work is done by things such as human brains or computing machines. The model feeds into them, or it guides them, or it's generated by them. It reflects a particular approach to envisioning, summarizing, and interpreting its object. Endless forms of models show up in numerous fields. Sometimes a thoroughly intangible model can nonetheless be more tangible and understandable than its "object" (a raw scatter plot?). 

Some models are sketchy or preliminary, and some are refined. Some seem almost indistinguishable from the represented object, and some seem surprising and obscure. A model might include mathematical expressions. It might include clear-cut declarations about the object's characteristic trends, causes, factors, etc. The most prestigious theories of settled science are the models that merit the most credence. But many models are adequate without reaching that rare tier. Whether a model is informal or not, its construction is often slow and painstaking; it comes together by logically analyzing all the information that can be gathered. It's unambiguous, though it might be overturned or superseded eventually. It's fit to be communicated and taught. Chances are, other models of high quality were the bedrock for it; if not, then at minimum these others aren't contradictions of it. It's double-checked and cross-checked. Its sources are identifiable.

Despite the toil that goes into it, the fact is that a typical model is probably incomplete. Comprehensive, decisive data and brilliant insights could be in short supply. Or portions of the model could be intentionally left incomplete/approximate on behalf of practical simplicity. When relevant features vary widely and chaotically from case to case, inflexible models would be expected to perfectly match no case besides the "average". Perhaps it's even possible to model the model's shortcomings, e.g. take a produced number and increase it by 15%.

For whatever reason, the model and some cases will differ to some extent...but improving the model to completely eliminate the individual differences would be infeasible. So the disparity from the model is fudged. I'm using this verb as broadly as I'm using "model". It's whenever the person applying the model decides on an ad hoc adjustment through their preferred vague mix of miscellaneous estimations, hunches, heuristics, and bendable guidelines. Hopefully they've acquired an effective mix beforehand through trial and error. (The result of fudging may be referred to as "a fudge".)
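As a toy illustration of the divide (a hedged sketch with invented numbers, not a claim about any real estimating practice): a model is an explicit rule anyone can rerun, while fudging layers ad hoc adjustments on top of its output.

    # Toy illustration only: a hypothetical model vs. ad hoc fudging of it.

    def model_estimate(square_feet: float) -> float:
        """The 'model': an explicit, checkable rule distilled from past cases."""
        return 50.0 * square_feet + 2000.0  # made-up coefficients

    def fudged_estimate(square_feet: float, seems_tricky: bool, client_is_fussy: bool) -> float:
        """The 'fudge': ad hoc adjustments layered on top, driven by hunches."""
        estimate = model_estimate(square_feet)
        if seems_tricky:        # heuristic bump, not derived from the model
            estimate *= 1.15
        if client_is_fussy:     # another unrecorded rule of thumb
            estimate += 500.0
        return estimate

    print(model_estimate(400))                # transparent and reproducible
    print(fudged_estimate(400, True, False))  # opaque to a second estimator

The sketch's only point is that a second person could recreate and check the first function, while the second function's bumps live in the estimator's head and habits.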

If a realm of understanding is said to be an art as much as a science, then the model is the science and the fudging is the art. In the supposed contrast of theory with practice, the model is the theory and the fudging is one part of the practice. The model is a purer ideal and the fudging is a messier manifestation of the collision with the complexity of circumstance. The model is generally transparent and the fudging is generally opaque. Importantly, a person's model is significantly easier for a second person to recreate than their fudging.

The crux is that a bit of a fudge, especially if everyone is candid about it, is neither unusual nor a serious problem. But overindulgence rapidly becomes a cause for concern. The styles of thinking that characterize normal fudging are acceptable as humble supplements to models but not as substitutes. One possible issue is reliance on implicit, unchecked assumptions. Models embody assumptions too, but model assumptions undergo frank sifting as the model is ruthlessly juxtaposed with real cases of its object.

Another issue is the temptation to cheat: work backwards every time and keep quiet about the appalling inconsistencies among the explanations offered. Someone's slippery claim of fudging their way to the exact answer in case A through simple (in retrospect) steps 1, 2, and 3 should lose credibility after they proceed to claim that they fudged their way to the altogether different exact answer in case B through simple (again, in retrospect) alternative steps 1', 2', and 3'. A model would impose a requirement of self-coherency—or impose the inescapable confession that today's model clashes with yesterday's, and the models' underlying "principles" have been whims.

Yet another issue is the natural predisposition to forget and gloss over the cases whenever fudging was mistaken. The challenge is that new memories attach through, then later are reactivated through, interconnections with prior ideas. But a mismatch implies that the fudging and the case don't have the interconnections. The outcome is that a lasting memory of the mismatch won't form naturally. The primal urge to efficiently seek out relations between ideas comes with the side effect of not expending memory to note the dull occurrence of ideas not having relations. After being picked up, unrecorded misses automatically drift out of memory, like the background noise of an average day. A rigid model closes this hole because it enforces conscious extra effort.

The issues with replacing modeling with too many fudges don't stop there. A sizable additional subcategory stems from the customary risk of fudging: judging between hypothetical realities by the nebulous "taste" of each. That method would be allowable...as long as a person's sense of taste for reality is both finely calibrated and subjected to ongoing correction as needed. Sadly, these conditions are frequently not met. Some who imagine that their taste meets the conditions may in actuality be complacent egotists who're incapable of recognizing their taste's flaws.

A few comparisons will confirm that overrated gut feelings and "common" sense originate from personal contexts of experience and culture. Divergent experiences and cultures produce divergent gut feelings and common sense. An infallible ability to sniff out reality wouldn't emerge unless the sniffer were blessed with an infallible context. That's...extremely improbable. Someone in an earlier era might have confidently said, "My long-nurtured impression is that it's quite proper to reserve the privilege and responsibility of voting to the kind of men who own land. Everybody with common sense is acutely aware of the blatant reality that the rest of the population cannot be trusted to make tough political decisions. Opening the vote to them strikes me as foolhardy in the innermost parts of my being."

But context is far from the sole way to affect taste. Propagandists (and marketers) know that repetition is an underestimated strategy. Monotonous associations lose the taste of strangeness. Once someone has "heard a lot" about an assertion, they're more likely to recall it the next time they're figuring out what to think just by fudging. Its influence is boosted further if it's echoed by multiple channels of information. For people who sort reality by taste, its status doesn't need to achieve airtight certainty to be a worthwhile success. Success is achieving the equivocal status that there "must be something to it" because it's been reiterated. A fog of unproven yet pigheaded assertions would be too insubstantial to meaningfully revise a model, but with enough longevity it can evidently spoil or sweeten a reality's taste by a few notches.

Repetition is clumsy, though. Without a model to narrow its focus, the taste for reality is susceptible to cleverer attacks. Taste is embodied in a brain packed with attractions and aversions. The ramification is that emotional means can greatly exaggerate scarce support. A scrap in a passionately moving frame manages to alter taste very well. In the midst of weighing perspectives by impromptu fudging, the stirring one receives disproportionate attention. If the scale has been tilted masterfully, someone will virtually recoil from the competing perspectives. Gradually establishing the plausibility of a model is a burden compared to merely tugging on the psychological reins.

If distorting taste by exploiting the taster's wants seems brazen, the subtler variation is exploiting what the taster wants to believe. It may be said that someone is already half-convinced of notions that would fit snugly into their existing thoughts. The desire for a tidy outlook can be a formidable ally. It's not peculiar to favor the taste of a reality with fewer independent pieces and pieces that aren't in discord. The more effortlessly the pieces fall into place, the better. Purposefully crafting the experience that a proposal gratifies this component of taste is like crafting the experience that it's as sound as a mathematical theorem. It will appear not only right but indisputable. Searching will immediately stop, because what other possibility could be more satisfying? ...Then again, some unexpected yet well-verified models have been valued all the more as thrilling antidotes to small-mindedness.

This extended list of weaknesses suggests that compulsive fudgers are the total opposite of model thinkers. However, the universe is complicated, so boundaries blur. To repeat from earlier, model thinkers regularly have the need to fudge the admitted limits of the models. And the reverse has been prevalent at various times and locations in human history: fudge after fudge leads to the eventual fabrication of a quasi-model. The quasi-model might contain serviceable fragments laid side by side with drivel. It might contain a combination of valid advice and invalid rationales for the advice. The quasi-model is partially tested, but the records of its testing tend to be patchy and expressed in all-or-nothing terms. It could be passed down from generation to generation in one unit, but there's uncertainty about which parts have passed or failed testing and to what degree.

Once someone does something more than fudge in regard to the quasi-model, it might develop into a legitimate model. Or, its former respectability might be torn down. The dividing line between quasi-model and model is a matter of judgment. If it's resting on fudges on top of fudges, then signs point to quasi-model.