Saturday, December 09, 2017

hunting for license

Followers of materialistic naturalism like myself have a reputation for scoffing at wishful thinking. We're pictured as having an unhealthy obsession with bare inhuman facts. Despite that, I'm well aware that one of my own ideals is closer to a wish than to a reality a lot of the time. I refer to the recommended path to accurate thoughts. It starts with collecting all available leads. After the hard work of collection comes the tricky task of dispassionately sorting and filtering the leads by trustworthiness. Once sorting and filtering are done, the more trustworthy leads form the criteria for judging among candidate ideas. The sequence is akin to estimating a landmark's position after taking compass bearings from three separate locations, not after a single impromptu guess by eye.

If I'm reluctantly conceding that this advice isn't always put into practice, then why not? What are people doing in its place? We're all creatures who by nature avoid pain and loss, including the pain of alarming ideas and the loss of ideas that we hold. That's why many people substitute the less risky objective of seeking out the leads which would permit them to retain the ideas they cherish for reasons besides accuracy. They're after information and arguments to give them license to stay put. As commentators have remarked again and again, the longest-lasting fans of the topics or debates of religious apologetics (or the dissenting counter-apologetics) are intent on cheering their side—not on deciding to switch anytime soon.

Their word choices stand out. They ask whether they can believe X without losing respect, or whether they must believe Y. Significantly, they don't ask which idea connects up to the leads with less effort than the others...or which idea introduces fewer shaky speculations than the others. The crux is that their darling idea isn't absurd and isn't manifestly contrary to one or more indisputable discoveries. They may still care a bit about its accuracy relative to competing ideas, but by comparison this quality is an afterthought. They're gratified as long as the idea's odds exceed a passable threshold. They can honestly envision that the idea could be valid in some sense. The metaphor isn't searching for a loophole but snatching up every usable shim to fix the loose fit of their idea within the niches that need filling. The undisguised haphazardness of it at least ensures that it's adaptable and versatile.

Of course, at root it's a product of compromise for people who are trying to navigate all the forces which push and pull at them. It soothes them about the lower priority they're consciously or unconsciously assigning to accuracy. By scraping together an adequate collection of leads to make an idea viable, they're informing everyone, including themselves, that their selection of the idea isn't unreasonable. If the pursuit of authenticity were a game, they'd be insisting that their idea isn't out of bounds.

Irritatingly, one strange outcome of their half-cured ignorance might be an overreaction of blind confidence. The brashest of them might be moved to declare that their idea is more than just allowable; it's a first-rate "alternative choice" that's as good or better than any other. By transforming it into a matter of equal preference, they can be shameless about indulging their preference.

In isolated cases it really could be. But the relevant point is that, successful or not, the strategy they used was backwards. They didn't go looking honestly for leads to the top idea. All they wanted was greater license to keep the idea they were already carrying with them while they looked. In effect, thinking in these terms motivates more than a mere bias toward noticing confirmations of their idea: they're sent on the hunt. Needless to say, they can't expect anybody else to be as enchanted by their hunt's predictable findings.

Friday, December 01, 2017

trading posts

From time to time I'm reminded that it's misguided to ask, "Why is it that faulty or unverified ideas not only survive but thrive?" In my opinion the opposite question is more appropriate: "What has prevented, or could prevent, faulty or unverified ideas from achieving even more dominance?" The latter recognizes the appalling history of errors which have seized entire populations in various eras. The errors sprout up in all the corners of culture. The sight of substandard ideas spreading like weeds isn't exceptional; it's ageless.

The explanations are ageless too. One of these is that the ideas could be sitting atop a heaping pile of spellbinding stories of dubious origin. After a story has drawn its audience toward an unreal idea, it doesn't vanish. Its audience reproduces it. It might mutate in the process. When it's packaged with similar stories, its effect multiplies. Circles of people go on to trade the stories eagerly, because when they do they're trading mutual reassurance that they're right. There's a balance at work. Possessing uncommon knowledge is thrilling, especially when it's said to be both highly valuable and "suppressed". But the impression that at least a few others subscribe to the same arcane knowledge shields each of them from the doubt that they're just fantasizing or committing an individual mistake.

As is typical, the internet intensifies this pattern of human behavior rather than creating it out of nothing. It's merely a newer medium, albeit one with a tremendous gain in convenience and visibility over older forms. In the past, the feat of trading relatively obscure stories depended on printed material such as newsletters or pamphlets or rare books. Or it happened gradually through a crooked pipeline of conversations that probably distorted the "facts" more and more during the trip. Or it was on the agenda of meetings quietly organized by an interested group that had to already exist in the local area.

Or a story could be posted up somewhere for its intended audience. This method especially benefited from the internet's speed and wide availability. Obviously, a powerful computer (or huge data center full of computers) with a memorable internet address can provide a spectacular setting for modern electronic forms of these posted stories—which have ended up being called "posts". Numerous worldwide devices can connect to the published address to rapidly store and retrieve the posts. Whether the specific technology is a bulletin board system, a newsgroup, an online discussion forum, or a social media website, the result is comparable. Whole virtual communities coalesce around exchanging posts.

Undoubtedly, these innovations have had favorable uses. But they've also supplied potent petri dishes for hosting the buildup of the deceptive, apocryphal stories that boost awful ideas. So when people perform "internet research", they may trip over these. The endlessly depressing trend is for the story that's more irresistible, and/or more smoothly comprehended, to be duplicated and scattered farther. Unfortunately that story won't necessarily be the most factual one. After all, a story that isn't held back by accuracy and complex nuance is freer to take any enticing shape.

It might be finely tuned in its own way, though. A false story that demands almost no mental effort from its audience might provoke disbelief at its easiness. It would bear too close a resemblance to a superficial guess. That's avoided by a false story that demands a nugget of sustained but not strenuous mental effort—or a touch of inspired cleverness. It more effectively gives its audience a chance of feeling smug that now they're part of the group who're smart and informed enough to know better.

I wish I knew a quick remedy for this disadvantage. I guess the superior ideas need to have superior stories, or a mass refusal to tolerate stories with sloppy substantiation needs to develop. Until then the unwary public will be as vulnerable as they've ever been to the zealous self-important traders of hollow stories and to the fictional "ideas" the stories promote.

Monday, November 27, 2017

no stretching necessary

I'm often miffed at the suggestion that my stance is particularly extremist. According to some critics, materialistic naturalism is an excessive interpretation of reality. It's a stretch. They submit that its overall judgment is reaching far beyond the tally of what's known and what isn't. It's displaying its own type of dogmatism: clinging too rigidly to stark principles. It's declaring a larger pattern that isn't really present.

This disagreement about who's stretching further is a useful clue about fundamental differences. It highlights the split in our assumptions about the proper neutral starting point of belief. When two stances have little overlap, the temptation is to hold up a weak mixture of the two as the obvious default, i.e. the most minimal and impartial position. Like in two-party politics, the midpoint between the two poles receives the label of "moderate center". As the poles change, the center becomes something else. I gladly admit that according to their perceived continuum of beliefs about the supernatural domain's level of activity and importance, mine lies closer to one of the clear-headed ends than to something mushier.

From my standpoint, their first mistake is imposing the wrong continuum. Applying it to me is like saying that the length of my fingernails is minuscule next to a meter stick. Although a continuum can be drawn to categorize me as having an extreme stance, the attempt doesn't deserve attention until the continuum itself is defended. It's not enough to decree that people are more or less radical depending on how much their stance differs from yours. For a subjective estimation to be worth hearing, the relative basis for the estimation needs to be laid out fairly. Part of that is acknowledging the large role played by culture. Rarities in one culture might be commonplace in a second. Many times, perhaps most of the time, the customary continuum of "normal" belief isn't universal but a reflection of the setting.

The good news is that the preferable alternative isn't strange or complicated: it's the same kind of continuum of belief that works so wonderfully in myriad contexts besides this one. This kind begins with the stance that people have before they've heard of the belief. This beginning stance is perfect uncertainty about the belief's accuracy. It's at 0, neither positive nor negative.

Yet until the scales are tipped for high-quality reasons, the advisable approach is to continue thinking and acting on the likely guess that the belief isn't to be trusted. This is a superb tactic simply because unreliable beliefs are vastly cheaper to develop than reliable beliefs—the unreliable should be expected to outnumber the reliable. In the very beginning, before more is known, to make an unjustified leap to a strongly supportive stance about the belief...would be a stretch.

That's only one point of the continuum. But the rule for the rest is no more exotic: the intensity of belief corresponds to the intensity of validation. Information raises or lowers the willingness to "bet" on the belief to serve some purpose. The information accumulates, which implies that new information doesn't necessarily replace older information. Each time, it's important to ask whether the belief is significantly better at explaining information than mere statistical coincidence.

A popular term for this kind of continuum is Bayesian. Or, to borrow a favorite turn of phrase, it could be called the kind that's focused on fooling ourselves less. It's a contrast to the myth-making kind of continuum of belief in which stances are chosen based on the familiar belief's inherent appeal or its cultural dominance to each individual believer. At the core, Bayesian continua are built around the ideal of studiously not overdoing acceptance of a belief. This is why it's a futile taunt to characterize a Bayesian continuum stance as a fanatical overreach. The continuum of how alien a stance feels to someone is entirely separate. For that matter, when the stance is more unsettling largely as a result of staying strictly in line with the genuinely verified details, the reaction it provokes might be an encouraging sign that pleasing the crowd isn't its primary goal. If someone is already frequently looking down to check that they're on solid ground, they won't be disturbed by the toothless charge that they've stepped onto someone else's definition of thin ice.
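To make that less abstract, here's a minimal sketch of the bookkeeping a Bayesian continuum implies. It starts from even odds (the "at 0" stance) and nudges the odds with each new lead according to how much more expected that lead is if the belief is true than if it's false. The likelihood numbers below are invented purely for illustration, not drawn from anything real.

```python
import math

def update_log_odds(log_odds, p_if_true, p_if_false):
    """Nudge a belief's log-odds by the log likelihood ratio of one new lead."""
    return log_odds + math.log(p_if_true / p_if_false)

# Perfect uncertainty: even odds, i.e. log-odds of 0 (probability 0.5).
log_odds = 0.0

# Invented leads: (chance of seeing this if the belief is true,
#                  chance of seeing it if the belief is false)
leads = [(0.8, 0.4), (0.6, 0.5), (0.2, 0.7)]

for p_true, p_false in leads:
    log_odds = update_log_odds(log_odds, p_true, p_false)
    probability = 1.0 / (1.0 + math.exp(-log_odds))
    print(f"willingness to 'bet' on the belief: {probability:.2f}")
```

Nothing in the sketch decides what counts as a good lead; it only shows that the willingness to "bet" moves by exactly as much as the information warrants, in either direction.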

The accusation has another self-defeating problem: the absolutist closed-mindedness that it's attacking isn't applicable to most people of my stance. All that's needed is to actually listen to what we say when we're politely asked. Generally we're more than willing to admit the distinction between impossible and improbable beliefs. We endorse and follow materialistic naturalism, but we simultaneously allow that it could be missing something frustratingly elusive. We could be shown that it needs to be supplemented by a secondary stance.

But by now, after a one-sided history of countless missed opportunities for supernatural stuff to make itself plain, and steadily shrinking gaps for it to be hiding in, the rationale would need to be stunningly dramatic. It would need to be something that hasn't come along yet, such as a god finally speaking distinctly to the masses of planet Earth. Corrupted texts, personal intuitions, and glory-seeking prophets don't suffice. The common caricatures of us are off-target. We aren't unmovable. We'd simply need a lot more convincing to prod us along our Bayesian continua. (The debatable exceptions are hypothetical beings defined by mutually contradictory combinations of characteristics; logic fights against the existence of these beings.)

There is a last amusing aspect of the squabble over the notion that materialistic naturalism stretches too far to obtain its conclusions. Regardless of my emphatic feeling that intermediate stances aren't the closest match to the hard-earned knowledge that's available, I'm sure I'm not alone in preferring that more people follow them in place of some others. While I disagree that their beliefs are more plausible than mine, deists/pantheists/something-ists upset me less than the groups whose gods are said to be obsessed with interfering in human lives. I wouldn't be crushed if the single consequence of losing the argument were more people drifting to the supposed "middle ground" of inoffensive and mostly empty supernatural concepts.

Because outside of staged two-person philosophical dialogues, it's a short-sighted strategy to argue that my stance presumes too much. It'd only succeed in flipping people from my stance to the arguer's after the arguer added the laughable claims that theirs somehow presumes less than mine, and that it presumes less than deism/pantheism/something-ism...

Saturday, November 11, 2017

supertankers and Segways

Back when my frame of mind was incorrect yet complacent, the crucial factor wasn't impaired/lazy intelligence or a substandard education. It wasn't a lack of exposure to influences from outside the subculture. The stumbling block was an extensive set of comforting excuses and misconceptions about not sifting my own supernatural ideas very well for accuracy...combined with the nervous reluctance to do so. It was an unjustified and lax approach to the specific ideas I was clutching.

I wasn't looking, or not looking intently enough, at the ideas' supporting links. Like the springs around the edge of a trampoline, an idea's links should be part of the judgment of whether its definition is stable and sturdy under pressure. If it's hardly linked to anything substantial, or the links are tenuous, its meaningfulness deserves less credit.

This metaphor suggests a revealing consequence of the condition of the ideas' links. When the links are tight and abundant, the ideas are less likely to change frequently or radically. An idea that can be abruptly reversed, overturned, rearranged, etc. is more consistent with one that's poorly rooted. Perhaps its origin is mostly faddish hearsay. If it can rapidly turn like a Segway for the tiniest reason, it's not showing that it's well-connected to anything that stays unchanged day by day.

However, if it turns in a new direction gradually like a huge supertanker-class ship, it's showing that its many real links were firmly reinforcing its former position/orientation. Changes would require breaking or loosening the former links, or creeping slightly within the limits that the links permit. By conforming to a lengthy list of clues, an explanation places the same demands on all modifications or substitutions of it. This characteristic is what separates it from a tentative explanation. Tentativeness refers to the upfront warning that it isn't nailed down or solidified. The chances are high that it might be adjusted a lot in the near future in response to facts that are currently unknown.

Although revolutionary developments are exciting, such events call for probing reexamination of the past and maybe more than a little initial doubt. There might be a lesson about improving methods for constructing and analyzing the ideas' links. Why were the discarded ideas followed before but not now? How did that happen?

Amazing upheavals of perspective should be somewhat rare if the perspective has a sound basis. Anyone can claim to have secret or novel ideas that "change everything we thought we knew". The whole point is to specifically not grant them an easy exclusion from the interlinking nature of knowledge. Do their ideas' links to other accurate ideas—logic/proofs, data, calculations, observations, experiments, and so on—either outweigh or invalidate the links of the ideas that they're aiming to replace?

If their ideas are a swerve on the level they describe, then redirecting the bulk of corroborated beliefs ought to resemble turning a supertanker and not a Segway. Of course, this is only an expectation for the typical manner in which a massive revision takes place. It's not a wholesale rejection of the possible need for it. From time to time, the deceptive influence of appealing yet utterly wrong ideas can last and spread for a long time. So it's sensible that the eventual remedy would have an equally large scale. Paradigm shifts serve a purpose.

But they must be exceedingly well-defended to be considered. When part of an idea's introduction is "First forget all you think you know", the shrewd reaction isn't taking this unsolicited advice at face value. It's "The stuff I know is underpinned by piles and piles of confirmations and cross-checks. How exactly does your supposed revelation counter all that?" The apparent "freedom" to impulsively modify or even drop ideas implies that they were just dangling by threads or flapping in the wind to start with.

Thursday, October 26, 2017

unbridled clarity

Sometimes I suddenly notice contradictions between items of common knowledge. If the contradiction is just superficial then it might be presenting an opportunity rather than a problem; harmonizing the items can produce deeper insights. Right now I'm specifically thinking of two generalizations that show up regularly in Skeptic discussions about changes in people's beliefs.

First, solely providing data, no matter how high-quality it is, can be futile due to a backfire effect. The recipient's original viewpoint may further solidify as they use it to invalidate or minimize the data. Especially if they're feeling either threatened or determined to "win", they will frame and contort bare facts to declaw them. The lesson is that more information isn't always a cure-all for faulty thinking.

Second, the rise in casual availability of the internet supposedly lowers the likelihood that anyone can manage to avoid encountering the numerous challenges to their beliefs. Through the screens of their computing devices of all sizes, everyone is said to be firmly plugged into larger society's modern judgments about reality. Around-the-clock exposure ensures that these are given every chance to overturn the stale deceits offered by the traditions of subcultures. So the popular suspicion is that internet usage correlates to decreases in inaccurate beliefs.

At first reading, these two generalizations don't mesh. If additional information consistently leads to nothing beyond a backfire effect, then the internet's greater access to information doesn't help; conversely, if greater access to information via the internet is so decisive, then the backfire effect isn't counteracting it. The straightforward solution to the dilemma is to suggest that each influence is active in different people to different extents. Some have a backfire effect that fully outweighs the information they receive from the internet, while some don't. But why?

I'd say that the factor that separates the two groups is a commitment to "unbridled clarity". Of course, clarity is important for more than philosophical nitpicking. It's an essential concern in many varied situations: communication, planning, mathematics, recipes, law, journalism, education, science, to name a few. This is why the methods of pursuing it are equally familiar: unequivocal definitions, fixed reference points, statements that obey and result from logical rules, comparisons and contrasts, repeatable instructions, standard measurements, to name a few. It's relevant any time that the question "What are you really talking about?" must furnish a single narrow answer...and the answer cannot serve its purpose if it's slippery or shapeless.

If clarity were vigorously applied and not shunned, i.e. unbridled, then it would resist the backfire effect. Ultimately, its methods increase the clarity of one idea by emphatically binding it to ideas which have clarity to spare. A side effect is that the fate of the clarified idea is inescapably bound to the fates of those other ideas. Information that blatantly clashes with them implies clashing with the clarified idea too. When the light of clarity reveals that an idea is bound to observable consequence X, divergent outcome Y would dictate that the idea has a flaw. It needs to be discarded or thoroughly reworked.

Alternatively, the far less rational reaction would be to stubbornly "un-clarify" the discredited idea to salvage it. All that's involved is breaking its former bonds of conspicuous meaningfulness, which turned out to make it too much of a target. In other words, to cease asserting anything definite is to eliminate the risk of being proven incorrect. This is the route of bridled (restrained) clarity. It's a close companion of the backfire effect. Clarity is demoted and muzzled as part of the self-serving backfire effect of smudging the idea's edges, twisting its inner content to bypass pitfalls, and protecting it with caveats.

In the absence of enough clarity, even a torrent of information from the internet can run into a backfire effect. It's difficult to find successful rebuttals for ideas that either mean little in practice or can be made to mean whatever one spontaneously decides. Ideas with murky relationships to reality tend to be immune to contrary details. It should be unsurprising that beliefs of bolder clarity are frequently scorned as "naive" or "simplistic" by seasoned believers who are well-acquainted with the advantages of meager or wholly invisible expectations.

I'm inclined to place a lot of blame on an intentional or accidental lack of core clarity about beliefs. But I admit there are other good explanations for why the internet's plentiful information could fail to sway believers. The less encouraging possibility is that, despite all the internet has to offer, they're gorging on the detrimental bytes instead. They're absorbing misleading or fabricated "statistics", erroneous reasoning in support of their current leanings, poor attempts at humor that miss and obscure the main point, manipulative rumors to flatter their base assumptions...

Sunday, October 08, 2017

giving authenticity a spin

I'm guessing that no toy top has led to more furious arguments than the one in the final scene of Inception. The movie previously established that its eventual toppling, or alternatively its perpetual spinning, was a signal of whether the context is reality or a dream. Before this scene, the characters have spent a lengthy amount of time jumping between separate dream worlds. In the end, has the character in the scene emerged back to reality or hasn't he? The mischievous movie-makers purposely edited it to raise the unanswered question.

My interest isn't in resolving that debate, which grew old several years ago. But I appreciate the parallel with the flawed manner in which some people declare the difference between their "real" viewpoints and others' merely illusory viewpoints. They're the true realists and those others are faux realists. They're living in the world as it is, unlike the people who are trapped in an imaginary world. They're comparable to the dream infiltrators who made a successful return journey—or someone who never went in. They've escaped the fictions that continue to fool captive minds. All of their thoughts are now dependable messengers that convey pure reality. They're the most fortunate: they're plugged directly into the rock-bottom Truth.

I'm going to call this simplistic description naive realism. It seems to me that there are more appropriate ways of considering my state of knowledge. And the starting point is to recognize the relevance of the question represented by Inception's last scene. Many, hopefully a large proportion, of my trusted ideas about things have a status closer to real than fantasy. Yet simultaneously, I need to remember the undeniable fact that human ideas have often not met this standard. It's plausible that my consciousness is a mixture of real and fantasy ideas. Essentially, I'm in the ambiguous final movie scene. The top is spinning, but it also seems to have the slightest wobble developing. The consequence is that I can't assume that I'm inhabiting a perfectly real set of ideas.

Nevertheless, the opposite cynical assumption is a mistake too. A probably incomplete escape from fantasy doesn't justify broadly painting every idea as made-up rubbish. The major human predisposition to concoct and spread nonsense doesn't imply that we can only think total nonsense moment by moment. One person or group's errors don't lead to the conclusion that all people or groups are always equally in error. The depressing premise that everybody knows nothing is far, far too drastic. I, for one, detest it.

I'd say that the more honest approach is to stop asserting that someone's whole frame of mind is more real than another's. My preference is to first acknowledge that ideas are the necessary mediators and scaffolding that make sense of raw existence. But then acknowledge that these ideas can, and need to be, checked somehow for authenticity. It's not that one side has exclusive access to unvarnished reality and the other side is drowning in counterfeit tales. Both sides must use ideas—concepts, big-picture patterns, theories—to comprehend things and experiences. So the more practical response is for them to try to sift the authentic ideas from the inauthentic as they're able. The better question to identify their differences is how they defend the notion that their ideas are actual echoes of reality.

The actions that correspond to authenticity come in a wide variety. What or who is the idea's source? What was the source's source? Is the idea consistent with other ideas? Is the idea firmly stated, or is it constantly changing shape whenever convenient? Are the expected outcomes of its realness affecting people's observations or activities? Does it seem incredible, and if so then is it backed by highly credible support to compensate? Would the idea be discarded if it failed, or is it suspiciously preserved no matter how much it fails? Is it ever even tested?

Once again the movie top is a loose metaphor for these confirming details. A top that doesn't quit isn't meeting the conditions for authentic objects similar to it. Of course, by not showing what the top does, part of the intent of the final movie scene is to ask the secondary question of whether people should care. I hope so. It's tougher and riskier to screen ideas for authenticity, but the long-term rewards are worth it.

Saturday, September 23, 2017

what compartments?

For good reason, compartmentalization is considered to be a typical description for how a lot of people think. This means that they have isolated compartments in their minds for storing the claims they accept. Each claim is strictly limited to that suitable area of relevance. Outside its area it has no effect on anything, and nothing outside its area has any effect on it. Imprisoning claims in closed areas frees them to be as opposite as night and day. None are forced to simply be thrown out, and none are allowed to inconveniently collide.

The cost is that defining and maintaining the fine distinctions can be exhausting sometimes. But the reward is a complex arrangement that can be thoroughly comprehended, productively discussed, and flexibly applied. By design it satisfies a variety of needs and situations. Not only are contradictions avoided within a given compartment, but there are also prepared excuses for the contradictions between compartments.

It's such a tidy scheme that it's tempting to assume that this form of compartmentalization is more common than it is. Career philosophers, theologians, scholars, and debaters probably excel at it. Yet I highly doubt that everybody else is always putting that much effort into achieving thoughtful organization, self-coherency, and valid albeit convoluted reasoning. It seems to me that many—including some who offer peculiar responses to surveys—don't necessarily bother having consistent "compartments" at all. The territories of their competing claims are more fluid.

Theirs are more like the areas (volumes? regions?) of water in a lake. The areas are hardly separate but are touching, pushing, exchanging contents, shrinking, growing. People in this frame of mind may confess that their ideas about what's accurate or not aren't anchored to anything solid. Their felt opinion is endlessly shifting back and forth like the tide. Their speculations bob around on the unpredictable currents of their obscure intuitions.

Even so, the areas can be told apart. The areas aren't on the same side of the lake or aren't the same distance from the shore. The analogy is that, if prodded, the believer may roughly identify the major contrasting areas they think about. The moment that their accounts start to waver is when the areas' edges and relative sizes are probed. Tough examples help to expose this tendency. Would they examine pivotal claim Q by the rules of A or B? Perhaps they're hesitant to absolutely choose either option because they sense that committing would imply a cascade of judgments toward topics connected to Q.

In effect, the boundaries they use aren't like compartment walls but like water-proof ropes that run through a spaced series of little floats. These ropes are positioned on the surface of the lake to mark important areas. Unlike the tight lanes in an indoor swim race, if they're tied too loosely they move around a bit. Similarly, although believers may be willing to lay out some wobbly dividing lines on top of their turbulent thoughts, their shallow classifications could be neither weighty nor stable. They may refuse to go into the details of how they reach decisions about borderline cases. They can't offer convincing rationales for their differing reactions to specific claims.

This fuzzy-headed depiction raises a natural question. They certainly can't have consciously constructed all this for themselves. So how did it develop? What leads to "informal compartmentalization" without genuine compartments? The likely answer is that ideas from various sources streamed in independently. Close ideas pooled together into lasting areas, which eventually became large enough to be a destination for any new ideas that were stirred in later. The believer was a basin that multiple ways of thinking drained into. As their surroundings inevitably changed, some external flows surged and some dried up. Whether because of changing popularity or something more substantial, in their eyes some of their peers and mentors rose in trustworthiness and some sank. Over time, the fresh and stagnant areas were doomed to crash inside them.

The overall outcome is like several selves. The self that's most strongly activated in response to one idea isn't the self that would be for some other idea. This kind of context-dependent switching is actually a foremost feature of the brain. Its network structure means that it can model a range of varying patterns of nerve firings, then recall the whole pattern that corresponds to a partial signal. It's built to host and exploit chaotic compartmentalization.

A recurring metaphor for this strategy is a voice vote taken in a big legislature. The diverse patterns etched into the brain call out like legislators when they're prompted. The vote that emerges the loudest wins. The result is essentially statistical, not a purely logical consequence of the input. The step of coming up with a sound justification could happen afterward...or never. The ingrained brain patterns are represented by the areas in the lake. Overlapping patterns, i.e. a split vote, are represented by the unsteady area boundaries.
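As a toy illustration of that "loudest vote from a partial signal" idea, the sketch below stores a few made-up patterns and scores each one by its overlap with an incomplete cue; the highest score wins and the whole stored pattern is recalled. The pattern names and values are invented, and this is only a cartoon of associative recall, not a claim about real neural wiring.

```python
# Invented +1/-1 "firing" patterns standing in for the areas of the lake.
stored = {
    "area_A": [+1, +1, -1, -1, +1, -1, +1, -1],
    "area_B": [-1, +1, +1, -1, -1, +1, -1, +1],
    "area_C": [+1, -1, +1, +1, -1, -1, +1, +1],
}

# A partial cue: 0 means "no signal" for that element.
cue = [+1, +1, 0, 0, +1, 0, 0, -1]

# Each stored pattern "votes" with the strength of its overlap with the cue.
votes = {name: sum(c * p for c, p in zip(cue, pattern))
         for name, pattern in stored.items()}

winner = max(votes, key=votes.get)
print(votes)                                  # area_A shouts loudest here
print("recalled whole pattern:", stored[winner])
```

A cue that overlaps two stored patterns about equally is the split vote: which one wins is a statistical accident of whichever fragments happened to be present.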

The main lesson is that a many-sided viewpoint can be the product of passive confusion or willful vagueness, not mature subtlety. Unclear waters may be merely muddy, not deep. Arguing too strenuously with someone before they've been guided into a firm grasp on their own compartmentalization is a waste. It'd be like speaking to them in a language they don't know. One can't presume that they've ever tried to reconcile the beliefs they've idly picked up, or they've noticed the extent of the beliefs' conflicts. It might be more fruitful to first persuade them to take a complete, unflinching inventory of what they stand for and why. (Religious authorities would encourage this too. They'd prefer followers who know—and obey—what their religion "really" teaches.)

Tuesday, September 05, 2017

placebo providence

It might be counterintuitive, but I've found that in some ways the broad topic of religion abruptly became more interesting after I discarded mine. I stopped needing to be defensive about the superiority of my subset of religion and the worthlessness of others. Seeing all of them as mainly incorrect—albeit not all equally awful or decent—grants a new kind of objectivity and openness. Each item is a phenomenon to examine, rather than an inherent treasure or a threat. Additionally, the shift in viewpoint converted some formerly easy questions into more intriguing puzzles. One of these is "How can people sincerely claim that they've experienced specific benefits from their religion's phantoms?"

The answer "the phantoms exist" doesn't sound convincing to me now. But some alternatives are suggested by a longstanding concept from another context: the placebo. Although placebo pills or treatments don't include anything of medical relevance, recipients may report a resulting improvement in their well-being. Placebos famously illustrate that it's not too unusual for something fake to leave a favorable impression. The analogy almost writes itself. In terms of actual causes and effects, earnest belief in a generous spirit is superficially like earnest belief in a sugar pill.

Without further discussion, however, borrowing a complicated concept is a bit of a cheat. To do so is to gloss over too many underlying details. If something is said to work because it acts like a placebo, then...what is it acting like, exactly? The first possibility is that it's truly acting like nothing. As time goes on, people are continually affected by countless things, and the mixture of various factors churns and churns. So cycles occur depending on which things are dominant day by day. Good days follow bad days follow good days follow bad days. With or without the placebo, a good day might still have been coming soon. The good day was a subtle coincidence. This is why testing only once, on one subject, shouldn't be conclusive. Or the subject could've had an abnormally unpleasant day not long ago, and then they had a very average day. 
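A tiny simulation can show how strong this first possibility is even when the "treatment" is literally nothing. The baseline, noise level, and bad-day cutoff below are arbitrary numbers chosen only to illustrate the churn of good and bad days.

```python
import random

random.seed(1)

def day_score(baseline=50.0, noise=15.0):
    """One day's well-being: a noisy draw around a fixed baseline.
    No treatment of any kind exists anywhere in this simulation."""
    return baseline + random.gauss(0.0, noise)

sought_remedy = 0
felt_better = 0

for _ in range(100_000):
    before = day_score()
    if before < 35.0:             # only people having a bad day reach for a remedy
        sought_remedy += 1
        after = day_score()       # just another ordinary day later on
        if after > before:
            felt_better += 1

print(f"'improved' after taking nothing at all: {felt_better / sought_remedy:.0%}")
```

With these made-up numbers, roughly nine times out of ten the later day looks like an improvement, which is exactly why a single before-and-after comparison on one subject proves so little.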

A second possibility of placebo activity is that the subjects' awareness of it cued them to spend extra effort seeking, noting, and recalling good signs, as well as brushing aside bad signs. It's like telling someone to look up and see a cloud that's shaped like a horse; they might have said the cloud looked like something else if they'd seen it first. Or it's like asking them whether they're sure that they didn't witness a particular detail in the incident that happened yesterday. Their expectations were raised, so perception and memory were skewed. This tendency works by indirectly coloring their reactions to stimuli. So of course it's applicable to subjective outcomes, i.e. just generally feeling better. As anyone would expect, placebos score consistently higher in medical trials for subjective outcomes such as temporary pain relief than in trials for objective outcomes such as shrinking tumors. 

On the other hand, placebos' subjective gains point to a valuable principle. When root causes don't have swift solutions, enhancing the quality of someone's experience of the symptoms is still both feasible and worthwhile. Regulating attention and maintaining a balanced perspective are excellent mitigation strategies. Deflecting consciousness in a productive direction is an ability that can be developed. If that's too ambitious, then at least indulging in positive distraction will help. Shrewd choices about what to minimize and what to magnify lead to a definite, often underrated boost in mood. And it doesn't require lies and placebos.

The last possibility of a placebo's inner workings is that it affects the subject's brain, and then the alteration in the subject's brain adjusts the behavior of other organs. Unfortunately, the amount of control by this route is frequently misunderstood. For example, mental pictures and verbal affirmations don't "will" the immune system into doubling its effectiveness (though an overactive immune system would be a problem too). Keenly wanting doesn't telekinetically rearrange the relevant particles to match.

Nevertheless, a few types of crude brain-body governance are undeniable. These are called...emotions. The body rapidly responds to calm, agitation, fear, aggression, sadness. It's stressed or not stressed. Regardless of the cause being fictional or not, large or small, sudden or ongoing, vivid or abstract, the effect is that comparable signals flow instinctively from the brain to the rest of the body. On the basis of stopping and/or starting the flow of disruptive signals at the source, a placebo's power to bring about a tangible change isn't a total mystery. It'd be more surprising if a substantial reduction in emotional chaos didn't have desirable consequences for the subject and their lives.

These possible explanations for placebos correspond to categories of why people are apparently satisfied by the interventions of their preferred incorporeal entities. The first corresponds to lucky timing. Circumstances were about to brighten with no intervention, so a nonexistent intervention was simply sufficient. The second corresponds to slanted judgment. The thought of the intervention prods the believer to fixate on the upsides and rationalize the downsides. They look harder because they presume that the intervention they believe in had to have done something. The third corresponds to the physical side effects of believing in the intervention. If holding to the belief makes the believer more centered, slower to jump into unhealthy and unwise decisions, and closer to a group of supportive believers, then its rewarding side effects for the believer are substitutes for the missing rewards of the unreliable content of the belief.

One final comment comes to mind. Of all the statements made about placebos, the most curious is the proposal to try to achieve the "placebo effect" in regular clinical practice. To make the prescriptions ethically acceptable, the recipient is fully informed about what they're getting! This is like the question of why I didn't keep participating in my religion after I realized that it wasn't, um, accurate. The retort is that I lost my motivation to bother taking a pill that I knew was only a placebo.

Saturday, August 19, 2017

advanced friend construction

There's no shortage of unflattering, childlike comparisons to irritate the religiously devout. I know this from both my present position and also when I was on the receiving end. For example, they're caught up in incredible fairytales, or they're hopelessly dependent on the support of an ultimate parental figure, or they're too scared of social pressure to admit that the emperor has no clothes on.

But for today I'm interested in another: that they never outgrew having an imaginary friend. They're obligated to immediately deny this jab because, of course, their particular god isn't a human invention. But I wonder, only half-jokingly, if there's a second strong reason for them to feel offended by the comparison. It echoes the irritation of a forty-year-old when intense dedication to building intricate scale models is equated with an uncomplicated pastime for kids.

Simply put, they aren't casual, sloppy, or immature with what they're doing. They're grown adults who are seriously committed to performing excellent and substantial work, thank-you-very-much.

They most emphatically aren't in a brief juvenile phase. They apply great effort and subtlety to the task of maintaining their imaginary friend. (I should note once again that I'm narrowly describing the kind of religiosity that I've been around, not every kind there is.) They often thank it in response to good events. They often plead for help from it in response to bad events. They study sacred writings about it. They recite and analyze its characteristics. They develop an impression of its personality. They sing about its wonderfulness and accomplishments ("what a friend we have..."). They compare notes and testimonials with other people who say they're dear friends with it too. They sort out which ideas of theirs are actually messages sent from it. They apologize to it when they violate its rules. They attempt to decode its grand plan via the available scraps of clues.

The amount of toil might prompt outsiders to question why a non-imaginary being's existence is accompanied by such a demanding project. This reaction is sensible but fails to appreciate the great opportunity it presents for massive circular reasoning. Because a typical, straightforward imaginary friend doesn't present a large and many-sided challenge, the follower's endless striving indicates that theirs must not be in that category. Why would there be elaborate time-honored doctrines, requiring a sizable amount of education and debate, if theirs were just imaginary all along?

Furthermore, they may point to an additional huge difference that's much more perceptible to them than to an outsider looking in: theirs isn't solely a nice friend to fulfill their wishes and never disagree with them. Theirs is an autocrat of their thoughts and behavior ("lord"). It's far from qualifying as a friendly companion in some circumstances. It sees and judges everything. It forces them to carry out acts (good or bad) which they'd rather not. It dictates a downgraded view of them even as it dictates ceaseless adoration of itself.

All the while, to nonplussed observers they appear to be inflicting psychological self-harm. Or as if something unseen is invading them and turning their own emotions into weapons against themselves. An outrageous parallel of this startling arrangement is the pitiful Ventriloquist in the twisted "Read My Lips" episode of Batman: The Animated Series. He carries and operates a dummy named "Scarface", which looks and talks like an old-fashioned gangster boss. They treat each other like separate individuals. They don't seem to know each other's thoughts. Scarface habitually orders his tolerant underlings to speak to him, not to the mostly quiet and unmoving ventriloquist—whom he calls "the dummy". He's always in charge. He gets angry when the ventriloquist tries to contribute to conversation.

The utter absurdity is that the ventriloquist is the sole physical vehicle for the second personality Scarface, yet he's not immune from the paranoiac's hostility. His bully's existence would be totally impossible without his constant assistance. He's in confinement in his last scene, after Batman has prevailed and the dummy is demolished. And he's keeping himself busy carving a replacement...

I realize this parallel is dramatic in the extreme, although I'd note that the gap between it and some people's self-despising religious mentality is unfortunately smaller than it should be. Generally their creation isn't as uncaring or domineering as Scarface. But nor is it as tame and passive as an imaginary friend. For them, it blooms from out of the soil of their brain activity into a functioning character who exhibits specific qualities. It gathers up several sentiments in a coherent form. It's built from aspects of the self. It starts as a bare outline pencil sketch, then it's repeatedly redrawn and colored in section by section. It takes on a virtual life of its own. Over time, the person's brain manages to acquire a "feel for" the character. Thereafter, even without voluntary control, it can habitually compute the character's expected opinion. The character is an extra "brain program" that's loaded and ready, and like a memory it springs up whenever something activates it. The more single-minded someone is about the character, the greater number of triggers it will have.

The curious and thoroughly mockable catchphrase "What Would Jesus Do?" is one example of intentionally invoking an embellished character to answer a moral question. People similarly use these abilities when they infer that someone they know well would've enjoyed a joke or a song. This is simple empathy redirected in an abstract direction through the flexibility of human intelligence. These common abilities to simulate human or human-like characters—"Theory Of Mind" (ToM) is a well-known label—are the natural outcome of humans' evolutionary advantage of deeply complex social interaction. Noisy movie audiences confirm every day that characters certainly don't need to be nonfictional to be intuitively comprehended and then cheered or booed.

Sometimes, when people exit a religious tradition like the one I did, they may comment that they went through an emotional stage which resembled losing a friend. After spending years of advanced construction on the "friend" they lost, their level of broken attachment is genuine. For me personally, that happened to not be as problematic. I wasn't as...close with my imaginary friend as many are. So my puzzlement overtook my former respect early in the journey out. "Who are you, anyway? You don't make any sense anymore. I think I don't know you at all. Maybe I never did."

Saturday, July 15, 2017

upon further contemplation

I'm guessing this won't come as a shock: the sweeping advice that more faith would've prevented my unbelief fails to dazzle me. From my standpoint it seems equivalent to the bullheaded instruction, "You should have never let yourself revise what you think, no matter what well-grounded information came along, or how implausible or problematic your preferred idea was shown to be." I knowingly discarded the beliefs after open-eyed judgment. If more faith is seriously intended as a defense tactic, then it has a strong resemblance to the ostrich's. (The inaccuracy of the head-burying myth can be ignored for the sake of lighthearted analogy...)

I'm more entertained, but no more convinced, by specific recommendations that would've fortified my beliefs. Contemplative prayer's assorted techniques fit this category. These are said to improve the follower's soul through the aid of quietness, ritual, reflection, and focus. The soul is methodically opened up to unearthly influence. It's pushed to develop an engrossing portrayal of the supernatural realm. It's taught to frequently note and gather signs of this portrayal's existence. The edifying periods of intense concentration might be guided by spiritual mottoes, textual studies, mental images, dogmas. Intervals of fasting and solitude might be employed to heighten attentiveness. Presumably, all this effort goes toward two interlocking goals. First is an inspiring appreciation of God. Second is often having in-depth, warm, productive connections with God, at both scheduled and unscheduled times. Zealous contemplators like to declare that they're "in a relationship, not a religion" and that they walk and talk with God.

Nevertheless, I wouldn't rashly accuse them of telling deliberate lies about the phenomena their techniques lead to. Aside from the embellishment and reinterpretation that inevitably slip in, I don't assume that they're fabricating their entire reports. Dreams aren't perceptible outside of the dreamer's brain either, but that doesn't imply that no dreaming occurred. When they say they sense God, I'm willing to accept that their experience of sensing was triggered in them somehow. If an experience roughly corresponds to the activation of a brain region, then purposely activating the region could recall the experience. Anywhere in the world, a whiff of favorite food can conjure a memory of home.

The actual gap is between the meaning that they attribute to their contemplative techniques and the meaning that I attribute. They claim that they're harnessing the custom-made age-old wisdom of their particular tradition to come into contact with their unique God. But when I reexamine their techniques in a greater context, I can't avoid noticing the many close similarities with sophisticated psychological training. I'm referring to training by the broadest nonjudgmental definition. We're social creatures who have highly flexible brains. We're training each other and ourselves, by large and small degrees, constantly though not always consciously, for a host of admirable or despicable reasons. Where they perceive specialized paths to divinity, I perceive the unexceptional shaping of patterns of behavior and thinking.

No matter the topic, a complicated abstraction is usually a challenge for psychological training. Extra care is needed to ensure that it's memorable, understood, relevant, and stimulating. A number of ordinary exercises and factors can help. Undisturbed repetition is foremost. Obviously, over the short term it stops the abstraction from promptly fading or being pushed out by distractions. But for the knowledge to persist, undisturbed repetition shouldn't be crushed into a single huge session. It should be broken up into several, preferably with evenly spaced time in-between. Each should build on top of the previous. Old items should be reviewed before new items. It also helps when the material is itself put in a repetitive and thoughtful form, in which parts of the new items are reminiscent of parts of the old items. Mnemonics, rhymes, and alliteration have benefits other than stylistic flourishes.
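For what it's worth, the scheduling half of that advice is mechanical enough to sketch: several short, evenly spaced sessions, with everything already learned reviewed before any new items are introduced. The item names, the three-day gap, and the two-new-items-per-session pace are made up purely for illustration.

```python
from datetime import date, timedelta

def plan_sessions(items, start, gap_days=3, new_per_session=2):
    """Lay out evenly spaced sessions; each reviews everything learned so far,
    then introduces a small batch of new items."""
    sessions, learned, day = [], [], start
    for i in range(0, len(items), new_per_session):
        new_items = items[i:i + new_per_session]
        sessions.append((day, list(learned), new_items))  # (when, review, introduce)
        learned.extend(new_items)
        day += timedelta(days=gap_days)
    return sessions

for when, review, introduce in plan_sessions(
        ["item 1", "item 2", "item 3", "item 4", "item 5"], date(2017, 7, 15)):
    print(when, "| review:", review, "| new:", introduce)
```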

Better still is to supplement undisturbed repetition with active processing. Asking and answering questions about the abstraction forces it to come alive and be comprehended. The questions should be decisive and piercing, not vague, superficial, and easy. The aim is greater clarity. A clear abstraction appears surer and realer than a hazy one. Its familiarity increases as it's meditated on and reused. A secondary effect of active processing is to establish its links to other ideas. Its distinguishing characteristics are exposed. Its boundaries are drawn. It ceases to be a mysterious, remote, solitary blob. Instead it's nestled firmly in its known position by neighboring ideas: it's a bit like this and a bit unlike that.

If possible, the active processing should include personalizing the abstraction. A person may or may not be permitted to adapt it to suit themselves. But in either case, they can translate it into their own words and the symbols they find significant. And they can try to pinpoint informative overlaps between it and their larger perspective. Applying it to their vital concerns instantly raises its value in their thoughts. Lastly, to the extent that it influences their individual choices, it accumulates a kind of undeniable role in their personal history from then on.

Personalizing an abstraction works because brains have an innate talent for pouncing on information that affects the self. Stories and sense perception are two more brain talents that can be successfully targeted. The brain already has skills for absorbing concrete narratives and sensations. A compelling story is superior at conveying the qualities of someone or something. Visualizing something abstract aids in delivering it into consciousness, regardless of whether the visualization is merely a temporary metaphor. Paradoxical as it sounds, attaching many little sensory details can sometimes be beneficial for retention. Vividness enables an abstraction to grab and hold a bigger slice of awareness. Excessively minimal or dull descriptions engage less of the brain. Although a concise summary is quicker to communicate than a series of differing examples, the series invokes sustained attention. The multiple examples present multiple chances, using several variations, to make at least one enduring impression.

For mostly the same reason, adding a factor of emotion works too: it's a "language" which is built into the brain. It marks information as important. It boosts alertness toward an abstraction. Meanwhile, the flow of associations pushes an understanding of its parts. The parts to be opposed—problems or enemies—are enclosed by a repelling frame. The parts to be welcomed—solutions or allies—are enclosed by an appealing frame. A thorough bond between emotion and an abstraction can last and last. Its potency could potentially rival or exceed the potency of a bond to a tangible object. Such objects can be hobbled by the fatal shortcomings of realistic weaknesses and complex mixes of advantages and disadvantages, which bind to conflicting emotions.

It so happens that all these considerations feature prominently in the contemplative techniques that would've hypothetically sheltered me from unbelief. That's why I conceded earlier that diligent practice of the techniques probably does fulfill the promise...according to the contemplator. When psychological training is carried out well, I'd expect it to be effective at introducing and reinforcing craftily constructed abstractions. The end results are that numerous stimuli spontaneously give rise to the cultivated ideas. The ideas become the lenses for observing everything else. Dislodging them to make room for contrary thoughts starts to feel, um, unthinkable. Contemplators see themselves producing subtler insight into the being that created them and provided them an Earth to live on. People like me see them producing subtler refinements of the being they're continuously creating and for whom they've provided a brain to "live" in.

However, contemplation is doomed to be a flawed source of proof because it has no essential differences from the "more faith" remedy I first criticized. It often functions independently of tested realities outside the brain. When it's relying on imaginative modes, it operates separately from rigorous argumentation, pro or con. If I'd been more accomplished at it, would my escape have been longer and wobblier? I suppose. Yet I doubt that I could've fended it off forever.