Friday, May 18, 2018

the column of sieves

In the midst of controversial debates, it's too common for either side to demand a "smoking gun" proof from the other. They insist on an item of evidence that's easily understood, beyond all doubt, and strikingly dramatic. They need to be impressed before they'll budge an inch. (Of course, whether or not their demand is sincere is a separate question.)

The strategy works in the heat of a debate because grand indisputable tests are relatively rare. Tests that are actually workable tend to have built-in limitations. Ethical reports of such tests attach sober probabilities to the corresponding conclusions. Imperfection is normal. Tests have holes. The genuine reality could "slip through". A favorable result might still be wrong.

Nevertheless, the big picture is more hopeful. Uninformed people often fail to grasp the total value of an array of imperfect tests. Although one test has known weaknesses in isolation, combining it with other tests can make a vast difference. If one test's holes are like the holes in a sieve (...a sieve with fewer holes than a real sieve would have...), then an array of favorable tests is like a series of sieves stacked into a column. In this arrangement, anything that slipped through one sieve's holes would probably be caught by one of the others.

The rules of probability bear out this analogy. As long as the tests are statistically independent, i.e. running one test doesn't affect another, the likelihood that all of the array's favorable results are simultaneously false is very low. Consider five successful tests, where each success had a one in six chance of being false. Having every success be false would be comparable to rolling a six in all five rolls of a die.
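To make that arithmetic explicit, here's a minimal worked version, assuming the five tests are fully independent and each favorable result has exactly the stated one-in-six chance of being false:

```latex
P(\text{all five favorable results false}) = \left(\frac{1}{6}\right)^{5} = \frac{1}{7776} \approx 0.013\%
```

A single test leaves a worrisome one-in-six chance of error; the stack of five shrinks it to roughly one in eight thousand.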

Unfortunately, the persuasive power of an array of tests is a much more difficult story to tell. The focus is abstract and awkward. In place of a lone heroic test to admire, there's a team of mediocrities. It's broad and messy, like reality. Sleek certainty is easier to find in fantasies.

And the situation is generally less clear than this simplified portrayal. The mathematics are more complex and the conclusions more arguable. Perhaps the test results aren't all successes. Or opponents may suggest that the probability that a test is wrong should be greater; they may assert that it's far more flawed than it was said to be. Or, if they're more subtle, they may try to claim that the tests in the array just have identical holes ("pervasive blind spots"), so multiple tests aren't an improvement over one.

Attacks on the details would be progress compared to the alternative, though. Better that than a childish sweeping rejection of tests that can't be faultless. Perhaps the usual tests are suspect, but the recommendation is to round up all these usual suspect tests anyway. Four honest approximations reached through unconcealed methods deserve more credit than an "ultimate answer" reached through methods which are unknown or secret or dictatorial or unrepeatable.

Tuesday, February 27, 2018

verifiable definitions for everybody

It's a frequent mistake to try to elevate one culture's notions into universal laws. That's why I don't mind considering the charge that the philosophical stance I encourage is a "purely Western invention".  If it had no chance at broader relevance or appeal, I'd rather know than not.

Nevertheless, at least the core principle isn't a Western-only concept. It's binding an idea's meaning and accuracy to the "shadow" it should cast on outcomes, i.e. realities which are exposed via human actions. (These actions might be mental and/or passive, such as calculating or unbiased observing.) The important instructions to follow this principle properly aren't especially Western either: to be diligent, honest, and fair throughout the tasks of gathering up and evaluating the outcomes that supposedly reveal the corresponding idea's meaning/accuracy; to be alert to any outcomes that are equally supportive, or more supportive, of competing ideas; and most difficult of all, to recognize the absence of the idea's shadow on outcomes that really should've been affected if the idea were meaningful and accurate.

I think this stance is widely applicable because of the widely occurring human problems that it was created to address. When one person is trying to narrowly determine what another person is talking about, comparing the idea to specific actions and outcomes helps. "I'm talking about the position you would reach by traveling to latitude and longitude coordinates X and Y in decimal degrees...or the position you would see on a map by finding the intersection of them." A second problem might be trying to decide whether another person is using differing ideas to express a meaning that's close to identical. A third problem might be trying to estimate an idea's overall feasibility by estimating the feasibility of the actions associated with it. A fourth problem might be trying to deflate another person's deception (or ignorant self-delusion) by noticing that their idea lacks adequate rationales that anyone can check. Wherever and whenever there are groupings of people whose symbolic communication empowers them to refer in detail to things that aren't here right now, they're empowered to carelessly refer in detail to things that aren't so.

Actually, speaking from where I sit, it's debatable how widely this stance is firmly embraced within any of the cultures that are said to have a Western heritage. It's in a continual contest with other cultural currents, some of which have the backing of social pressure and a formidable history. The effect is that people within these cultures have also long used alternative "methods" such as superstition or the opaque rulings of unchallenged authority figures. True, the exact content of the shoddy ideas changes over time, because fashions—and blind spots—change. But the mechanism is depressingly consistent. For instance, anxiety about counteracting bad spirits gives way to anxiety about counteracting bad energy (or counteracting minute quantities of bad toxins?). I'm forced to confess that even many of the members of my own culture aren't in favor of the stance I preach.

A more general truism is at work here: sorting both ideas and cultures by a single Western/non-Western split is too coarse-grained to reliably predict people's responses. There are significant differences among all the cultures which are said to have a Western heritage. The result of these differences is that my stance is more welcome in some of them than in others. And it's more welcome in some subcultures than in others.

Just as receptiveness varies throughout the group of Western cultures, it varies throughout non-Western cultures as well. In the same way that everybody placed on one side cannot be assumed to be completely open to it, everybody placed on the other side cannot be assumed to be completely closed to it. Despite eager attempts both to neatly assign an idea to one side and then to emphasize a rift between the sides, large numbers of people on either side have always been willing and able to absorb the bits they like from the ideas that reach them. Voluntary exchange of useful ideas has been going on for as long as the voluntary exchange of goods has. The cultural divide might be a good representation of the accidental misunderstandings that can happen so easily...but it's hardly an impassable barrier in practice.

My expectations of finding common ground don't stop there. I suspect that "they" might find that portions of the idea feel familiar. Worthy ideas have a tendency to spring up independently at several times and places, although the triggering situations and the forms of expression are of course unique. In this sense, I highly doubt that the origins of stances like mine have always been in the cultures in the Western pigeonhole. As I stated earlier, the most abstract debates about meaning and accuracy still have natural motivations. People in non-Western cultures must have developed their own versions of the triangle of ideas/actions/outcomes...though perhaps less formally or less fully. If so, then their reaction will be "Oh, you're describing a view that has a few strong resemblances to ___ in my culture" instead of "You're describing a view which is entirely alien to anything that I've known before".

The risk of focusing on culture is missing the other levels at which opposing ways of thinking clash. Clashes at the level of culture are obviously pertinent, but so are the clashes at the level of the individual's everyday struggles with their own thoughts. The truth is that my upbringing in a Western-categorized culture wasn't sufficient to stop me from restricting the introspective reach of this "very Western stance" for years. Otherwise it would've clashed with the unsound beliefs that I shielded from its standards. The turning point came when I aimed it at my base assumptions; I stopped treating it as an optional tool suitable solely for limited areas of knowledge. As tough as it might be to introduce a way of thinking to someone, it's not as tough as the next challenge of convincing them to value a rigorously examined, intellectually coherent life. The end goal isn't to convert every part of their culture to be more "Western" (how boring); it's to equip them to better analyze the parts of their own many-sided cultures for themselves and to free their minds if they wish. Some Westerners like me have had the same task.

Saturday, January 06, 2018

Force-curious but not Jedi?

By springing surprise after surprise on its fanatical audience, Star Wars: The Last Jedi has triggered an avalanche of differing reactions. I for one was neither passionately against it nor passionately in favor of it. My personal interest in anything related to Star Wars has been lukewarm for years. I liked it enough while I was at the theater, but I didn't have the urge to go again. I'm less disturbed by its own plot twists than by the seemingly slim possibilities that it left for the next movie. What will the heroes do now to save themselves and defeat their foes? What mysteries remain unsolved? Perhaps the third Star Wars movie of recent years will be subtitled "The Search for Lando".

The movie managed to stun me with a few jaw-dropping scenes, though. All of these played with one revolutionary concept: the Jedi voluntarily becoming extinct. I should hastily add, however, that the dialogue pointedly clarifies that the hypothetical extinction would apply to the Jedi "religion" alone. The Force itself could never become extinct. As a result, individuals who have Force awareness and/or abilities would still be around, too. Without Jedi scholarship or apprenticeship to guide them, their belief status would be Force-curious but not Jedi. They'd be non-Jedi, or "Nons" for short.

A Star Wars universe of Nons offers plenty of fuel for speculation. Rather than benefit from the findings (and mistakes!) of their predecessors, each generation of them would fumble anew at understanding elementary topics and performing novice feats. They might opt to pick and choose from a hundred contradictory understandings of the Force. With their meager knowledge, they'd try to judge for themselves whether particular actions were "light-side" or "dark-side". Their ignorance of the dangers would inevitably result in a few—maybe more than a few—using the Force for selfish aims and eventually having a destructive effect on everything around them. In addition, some of them would be motivated by their loyalty to the specific groups they identify with to use the Force purely to advance the glory of their group instead of the common (galactic) good.

Their contact with the Force would be outside of time-tested paths. They wouldn't be weighed down by the irritating restrictions of petty Jedi taboos. They'd be free of the drudgery of antiquated teachings and heavy-handed institutions or councils or elders. On religion surveys they'd check the box for "no affiliation". If asked for more details, they'd say that they're spiritual but not religious. If pestered about what they're spiritual about, they'd say that they can sorta sense an invisible Force of benevolence when they reach out with their feelings. And they can lift rocks or janitorial tools through telekinesis.

I suppose that this is well-aimed at the current cultural context. My guess is that many would be open to seriously considering that informal, laid-back Nons going back to the basics of transcendence would be an improvement over haughty, intrusive, inflexible, child-indoctrinating Jedi who oversee big budgets and vast initiatives in a mega-Temple. Organizations, religious or otherwise, are loathed for how they can be abused to empower terrible leaders and control the members.

That's not all. In some, this attitude also operates at a deeper philosophical level. They loathe committing fully to definite statements of their own beliefs. In the spiritual realm, and almost everywhere else, it's thought to be more acceptable for everyone to be on their separate individualized journeys. Clear and verifiable thinking is devalued. Or, worse, it's purposely shunned to ensure that no opinion can ever be viewed as more accurate than another. That's why it's optional to confidently ground one's abstract beliefs in concrete actions, facts, duties, etc.

I doubt I'm spoiling the movie by revealing that, in the end, it didn't overturn one of Star Wars' core parts. It only proposed and discussed the shocking concept of a complete handover from Jedi to Nons. Nevertheless, it shows the pluses and minuses that stand out to me when I listen to the mushy words of the spiritual but not religious. If they're always able to think whatever they wish about "spirituality", then the odds are excellent that they're merely imagining a fluid external "cause" that they fill with their wholly subjective experiences. By its nature this phantom cause couldn't have an innate form or shape to take into account. But if they object to this impolite characterization and insist that their spooky spirit is as real as the Force is within the Star Wars fictional universe, then why does It exist by vaguer rules than every other real thing? Unlike those other real things, why can't they or anyone speak as conclusively about It in ways that are compatible with neutral confirmation...or disproof?

Saturday, December 09, 2017

hunting for license

Followers of materialistic naturalism like me have a reputation for scoffing at wishful thinking. We're pictured as having an unhealthy obsession with bare inhuman facts. Despite that, I'm well aware that one of my own ideals is closer to a wish than to a reality a lot of the time. I refer to the recommended path to accurate thoughts. It starts with collecting all available leads. After the hard work of collection comes the tricky task of dispassionately sorting and filtering the leads by trustworthiness. Once sorting and filtering are done, the more trustworthy leads form the criteria for judging among candidate ideas. The sequence is akin to estimating a landmark's position after taking compass bearings from three separate locations, not after a single impromptu guess by eye.

If I'm reluctantly conceding that this advice isn't always put into practice, then why not? What are people doing in its place? We're all creatures who by nature avoid pain and loss, including the pain of alarming ideas and the loss of ideas that we hold. That's why many people substitute the less risky objective of seeking out the leads which would permit them to retain the ideas they cherish for reasons besides accuracy. They're after information and arguments to give them license to stay put. As commentators have remarked again and again, the longest-lasting fans of the topics or debates of religious apologetics (or the dissenting counter-apologetics) are intent on cheering their side—not on deciding to switch anytime soon.

Their word choices stand out. They ask whether they can believe X without losing respect, or whether they must believe Y. Significantly, they don't ask which idea connects up to the leads with less effort than the others...or which idea introduces fewer shaky speculations than the others. The crux is that their darling idea isn't absurd and also isn't manifestly contrary to one or more indisputable discoveries. They may still care a bit about its accuracy relative to competing ideas, but by comparison this quality is an afterthought. They're gratified as long as the idea's odds exceed a passable threshold. They can honestly envision that the idea could be valid in some sense. The fitting metaphor isn't searching for a loophole but snatching up every usable shim to fix the loose fit of their idea within the niches that need filling. The undisguised haphazardness of it at least ensures that it's adaptable and versatile.

Of course, at root it's a product of compromise for people who are trying to navigate all the forces which push and pull at them. It soothes them about the lower priority they're consciously or unconsciously assigning to accuracy. By scraping together an adequate collection of leads to make an idea viable, they're informing everyone, including themselves, that their selection of the idea isn't unreasonable. If the pursuit of authenticity were a game, they'd be insisting that their idea isn't out of bounds.

Irritatingly, one strange outcome of their half-cured ignorance might be an overreaction of blind confidence. The brashest of them might be moved to declare that their idea is more than just allowable; it's a first-rate "alternative choice" that's as good as or better than any other. By transforming it into a matter of equal preference, they can be shameless about indulging their preference.

In isolated cases it really could be. But the relevant point is that, successful or not, the strategy they used was backwards. They didn't go looking honestly for leads to the top idea. All they wanted was greater license to keep the idea they were already carrying with them while they looked. In effect, thinking in these terms motivates more than a mere bias toward noticing confirmations of their idea: they're sent on the hunt. Needless to say, they can't expect anybody else to be as enchanted by their hunt's predictable findings.

Friday, December 01, 2017

trading posts

From time to time I'm reminded that it's misguided to ask, "Why is it that faulty or unverified ideas not only survive but thrive?" In my opinion the opposite question is more appropriate: "What has prevented, or could prevent, faulty or unverified ideas from achieving even more dominance?" The latter recognizes the appalling history of errors which have seized entire populations in various eras. The errors sprout up in all the corners of culture. The sight of substandard ideas spreading like weeds isn't exceptional; it's ageless.

The explanations are ageless too. One of these is that the ideas could be sitting atop a heaping pile of spellbinding stories of dubious origin. After a story has drawn its audience toward an unreal idea, it doesn't vanish. Its audience reproduces it. It might mutate in the process. When it's packaged with similar stories, its effect multiplies. Circles of people go on to trade the stories eagerly, because when they do they're trading mutual reassurance that they're right. There's a balance at work. Possessing uncommon knowledge is thrilling, especially when it's said to be both highly valuable and "suppressed". But the impression that at least a few others subscribe to the same arcane knowledge shields each of them from the doubt that they're just fantasizing or committing an individual mistake.

As is typical, the internet intensifies this pattern of human behavior rather than creating it out of nothing. It's merely a newer medium, albeit one with a tremendous gain in convenience and visibility compared to older forms. In the past, the feat of trading relatively obscure stories depended on printed material such as newsletters or pamphlets or rare books. Or it happened gradually through a crooked pipeline of conversations that probably distorted the "facts" more and more during the trip. Or it was on the agenda of meetings quietly organized by an interested group that had to already exist in the local area.

Or a story could be posted up somewhere for its intended audience. This method especially benefited from the internet's speed and wide availability. Obviously, a powerful computer (or huge data center full of computers) with a memorable internet address can provide a spectacular setting for modern electronic forms of these posted stories—which have ended up being called "posts". Numerous worldwide devices can connect to the published address to rapidly store and retrieve the posts. Whether the specific technology is a bulletin board system, a newsgroup, an online discussion forum, or a social media website, the result is comparable. Whole virtual communities coalesce around exchanging posts.

Undoubtedly, these innovations have had favorable uses. But they have also supplied potent petri dishes for the buildup of the deceptive, apocryphal stories that boost awful ideas. So when people perform "internet research", they may trip over these. The endlessly depressing trend is for the story that's more irresistible, and/or more smoothly comprehended, to be duplicated and scattered farther. Unfortunately that story won't necessarily be the most factual one. After all, a story that isn't held back by accuracy and complex nuance is freer to take any enticing shape.

It might be finely-tuned in its own way, though. A false story that demands almost no mental effort from its audience might provoke disbelief at its easiness. It would bear too close a resemblance to a superficial guess. That's avoided by a false story that demands a nugget of sustained but not strenuous mental effort—or a touch of inspired cleverness. It more effectively gives its audience a chance of feeling smug that now they're part of the group who're smart and informed enough to know better.

I wish I knew a quick remedy for this disadvantage. I guess the superior ideas need to have superior stories, or a mass refusal to tolerate stories with sloppy substantiation needs to develop. Until then the unwary public will be as vulnerable as they've ever been to the zealous self-important traders of hollow stories and to the fictional "ideas" the stories promote.

Monday, November 27, 2017

no stretching necessary

I'm often miffed at the suggestion that my stance is particularly extremist. According to some critics, materialistic naturalism is an excessive interpretation of reality. It's a stretch. They submit that its overall judgment is reaching far beyond the tally of what's known and what isn't. It's displaying its own type of dogmatism: clinging too rigidly to stark principles. It's declaring a larger pattern that isn't really present.

This disagreement about who's stretching further is a useful clue about fundamental differences. It highlights the split in our assumptions about the proper neutral starting point of belief. When two stances have little overlap, the temptation is to hold up a weak mixture of the two as the obvious default, i.e. the most minimal and impartial position. As in two-party politics, the midpoint between the two poles receives the label of "moderate center". As the poles change, the center becomes something else. I gladly admit that on their perceived continuum of beliefs about the supernatural domain's level of activity and importance, mine lies closer to one of the clear-headed ends than to something mushier.

From my standpoint, their first mistake is imposing the wrong continuum. Applying it to me is like saying that the length of my fingernails is minuscule next to a meter stick. Although a continuum can be drawn to categorize me as having an extreme stance, the attempt doesn't deserve attention until the continuum itself is defended. It's not enough to decree that people are more or less radical depending on how much their stance differs from yours. For a subjective estimation to be worth hearing, the relative basis for the estimation needs to be laid out fairly. Part of that is acknowledging the large role played by culture. Rarities in one culture might be commonplace in a second. Many times, perhaps most of the time, the customary continuum of "normal" belief isn't universal but a reflection of the setting.

The good news is that the preferable alternative isn't strange or complicated: it's the same kind of continuum of belief that works so wonderfully in myriad contexts besides this one. This kind begins with the stance that people have before they've heard of the belief. This beginning stance is perfect uncertainty about the belief's accuracy. It's at 0, neither positive nor negative.

Yet until the scales are tipped for high-quality reasons, the advisable approach is to continue thinking and acting on the likely guess that the belief isn't to be trusted. This is a superb tactic simply because unreliable beliefs are vastly cheaper to develop than reliable beliefs—the unreliable should be expected to outnumber the reliable. In the very beginning, before more is known, to make an unjustified leap to a strongly supportive stance about the belief...would be a stretch.

That's only one point of the continuum. But the rule for the rest is no more exotic: the intensity of belief corresponds to the intensity of validation. Information raises or lowers the willingness to "bet" on the belief to serve some purpose. The information accumulates, which implies that new information doesn't necessarily replace older information. Each time, it's important to ask whether the belief is significantly better at explaining information than mere statistical coincidence.

A popular term for this kind of continuum is Bayesian. Or, to borrow a favorite turn of phrase, it could be called the kind that's focused on fooling ourselves less. It's a contrast to the myth-making kind of continuum of belief in which stances are chosen based on the familiar belief's inherent appeal or its cultural dominance to each individual believer. At the core, Bayesian continua are built around the ideal of studiously not overdoing acceptance of a belief. This is why it's a futile taunt to characterize a Bayesian continuum stance as a fanatical overreach. The continuum of how alien a stance feels to someone is entirely separate. For that matter, when the stance is more unsettling largely as a result of staying strictly in line with the genuinely verified details, the reaction it provokes might be an encouraging sign that pleasing the crowd isn't its primary goal. If someone is already frequently looking down to check that they're on solid ground, they won't be disturbed by the toothless charge that they've stepped onto someone else's definition of thin ice.
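To make the mechanics concrete, here's a minimal sketch of one such continuum in Python, using Bayes' rule in odds form. The function name and the likelihood ratios are my own hypothetical choices for illustration, not anything from the post:

```python
def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian step: multiply the odds in favor of a belief by the
    strength of a new item of information, i.e. how much more likely that
    item is if the belief is true than if it's false."""
    return prior_odds * likelihood_ratio

# Start at perfect uncertainty: even odds, neither positive nor negative.
odds = 1.0

# Hypothetical evidence strengths, chosen only for illustration: two
# supportive items (ratios above 1) and one unfavorable item (below 1).
for ratio in [3.0, 0.5, 4.0]:
    odds = update_odds(odds, ratio)

probability = odds / (1 + odds)
print(f"odds: {odds:.1f}, probability: {probability:.2f}")
# -> odds: 6.0, probability: 0.86
```

Note that the running product never discards an earlier ratio, which matches the point above that information accumulates rather than simply replacing what came before.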

The accusation has another self-defeating problem: the absolutist closed-mindedness that it attacks isn't applicable to most people who share my stance. All that's needed is to actually listen to what we say when we're politely asked. Generally we're more than willing to admit the distinction between impossible and improbable beliefs. We endorse and follow materialistic naturalism, but we simultaneously allow that it could be missing something frustratingly elusive. We could be shown that it needs to be supplemented by a secondary stance.

But by now, after a one-sided history of countless missed opportunities for supernatural stuff to make itself plain, and steadily shrinking gaps for it to be hiding in, the rationale would need to be stunningly dramatic. It would need to be something that hasn't come along yet, such as a god finally speaking distinctly to the masses of planet Earth. Corrupted texts, personal intuitions, and glory-seeking prophets don't suffice. The common caricatures of us are off-target. We aren't unmovable. We'd simply need a lot more convincing to prod us along our Bayesian continua. (The debatable exceptions are hypothetical beings defined by mutually contradictory combinations of characteristics; logic fights against the existence of these beings.)

There is a last amusing aspect of the squabble over the notion that materialistic naturalism stretches too far to obtain its conclusions. Regardless of my emphatic feelings that intermediate stances aren't the closest match to the hard-earned knowledge that's available, I'm sure that I'm not alone in preferring that more people follow them instead of some of the alternatives. While I disagree that their beliefs are more plausible than mine, deists/pantheists/something-ists upset me less than the groups whose gods are said to be obsessed with interfering in human lives. I wouldn't be crushed if the single consequence of losing the argument were more people drifting to the supposed "middle ground" of inoffensive and mostly empty supernatural concepts.

Because outside of staged two-person philosophical dialogues, it's a short-sighted strategy to argue that my stance presumes too much. It'd only succeed in flipping people from mine to the arguer's stance after they added the laughable claims that theirs somehow presumes less than mine, and that it presumes less than deism/pantheism/something-ism...

Saturday, November 11, 2017

supertankers and Segways

Back when my frame of mind was incorrect yet complacent, the crucial factor wasn't impaired/lazy intelligence or a substandard education. It wasn't a lack of exposure to influences from outside the subculture. The stumbling block was an extensive set of comforting excuses and misconceptions that kept me from sifting my own supernatural ideas very well for accuracy...combined with a nervous reluctance to do so. It was an unjustified and lax approach to the specific ideas I was clutching.

I wasn't looking, or not looking intently enough, at the ideas' supporting links. Like the springs around the edge of a trampoline, an idea's links should be part of the judgment of whether its definition is stable and sturdy under pressure. If it's hardly linked to anything substantial, or the links are tenuous, its meaningfulness deserves less credit.

This metaphor suggests a revealing consequence of the condition of the ideas' links. When the links are tight and abundant, the ideas are less likely to change frequently or radically. An idea that can be abruptly reversed, overturned, rearranged, etc. is more consistent with an idea that's poorly rooted. Perhaps its origin is mostly faddish hearsay. If it can rapidly turn like a Segway for the tiniest reason, it's not showing that it's well-connected to anything that stays unchanged day by day.

However, if it turns in a new direction gradually, like a huge supertanker-class ship, it's showing that its many real links were firmly reinforcing its former position/orientation. Changes would require breaking or loosening the former links, or creeping slightly within the limits that the links permit. By conforming to a lengthy list of clues, an explanation places the same demands on all modifications or substitutions of it. This characteristic is what separates it from a tentative explanation. Tentativeness refers to the upfront warning that an explanation isn't nailed down or solidified. The chances are high that it might be adjusted a lot in the near future in response to facts that are currently unknown.

Although revolutionary developments are exciting, such events call for probing reexamination of the past and maybe more than a little initial doubt. There might be a lesson about improving methods for constructing and analyzing the ideas' links. Why were the discarded ideas followed before but not now? How did that happen?

Amazing upheavals of perspective should be somewhat rare if the perspective has a sound basis. Anyone can claim to have secret or novel ideas that "change everything we thought we knew". The whole point is to specifically not grant them an easy exclusion from the interlinking nature of knowledge. Do their ideas' links to other accurate ideas—logic/proofs, data, calculations, observations, experiments, and so on—either outweigh or invalidate the links of the ideas that they're aiming to replace?

If their ideas are a swerve of the magnitude they describe, then redirecting the bulk of corroborated beliefs ought to resemble turning a supertanker, not a Segway. Of course, this is only an expectation for the typical manner in which a massive revision takes place. It's not a wholesale rejection of the possible need for one. From time to time, the deceptive influence of appealing yet utterly wrong ideas can last and spread for a long time. So it's sensible that the eventual remedy would have an equally large scale. Paradigm shifts serve a purpose.

But they must be exceedingly well-defended to be considered. When part of an idea's introduction is "First forget all you think you know", the shrewd reaction isn't taking this unsolicited advice at face value. It's "The stuff I know is underpinned by piles and piles of confirmations and cross-checks. How exactly does your supposed revelation counter all that?" The apparent "freedom" to impulsively modify or even drop ideas implies that they were just dangling by threads or flapping in the wind to start with.

Thursday, October 26, 2017

unbridled clarity

Sometimes I suddenly notice contradictions between items of common knowledge. If the contradiction is just superficial then it might be presenting an opportunity rather than a problem; harmonizing the items can produce deeper insights. Right now I'm specifically thinking of two generalizations that show up regularly in Skeptic discussions about changes in people's beliefs.

First, solely providing data, no matter how high-quality it is, can be futile due to a backfire effect. The recipient's original viewpoint may further solidify as they use it to invalidate or minimize the data. Especially if they're feeling either threatened or determined to "win", they will frame and contort bare facts to declaw them. The lesson is that more information isn't always a cure-all for faulty thinking.

Second, the rise in casual availability of the internet supposedly lowers the likelihood that anyone can manage to avoid encountering the numerous challenges to their beliefs. Through the screens of their computing devices of all sizes, everyone is said to be firmly plugged into larger society's modern judgments about reality. Around-the-clock exposure ensures that these are given every chance to overturn the stale deceits offered by the traditions of subcultures. So the popular suspicion is that internet usage correlates with decreases in inaccurate beliefs.

At first reading, these two generalizations don't mesh. If additional information consistently leads to nothing beyond a backfire effect, then the internet's greater access to information doesn't help; conversely, if greater access to information via the internet is so decisive, then the backfire effect isn't counteracting it. The straightforward solution to the dilemma is to suggest that each influence is active in different people to different extents. Some have a backfire effect that fully outweighs the information they receive from the internet, while some don't. But why?

I'd say that the factor that separates the two groups is a commitment to "unbridled clarity". Of course, clarity is important for more than philosophical nitpicking. It's an essential concern in many varied situations: communication, planning, mathematics, recipes, law, journalism, education, science, to name a few. This is why the methods of pursuing it are equally familiar: unequivocal definitions, fixed reference points, statements that obey and result from logical rules, comparisons and contrasts, repeatable instructions, standard measurements, to name a few. It's relevant any time that the question "What are you really talking about?" must furnish a single narrow answer...and the answer cannot serve its purpose if it's slippery or shapeless.

If clarity were vigorously applied and not shunned, i.e. unbridled, then it would resist the backfire effect. Ultimately, its methods increase the clarity of one idea by emphatically binding it to ideas which have clarity to spare. A side effect is that the fate of the clarified idea is inescapably bound to the fates of those other ideas. Information that blatantly clashes with them implies clashing with the clarified idea too. When the light of clarity reveals that an idea is bound to observable consequence X, divergent outcome Y would dictate that the idea has a flaw. It needs to be discarded or thoroughly reworked.

Alternatively, the far less rational reaction would be to stubbornly "un-clarify" the discredited idea to salvage it. All that's involved is breaking its former bonds of conspicuous meaningfulness, which turned out to make it too much of a target. In other words, to cease asserting anything definite is to eliminate the risk of being proven incorrect. This is the route of bridled (restrained) clarity. It's a close companion of the backfire effect. Clarity is demoted and muzzled as part of the self-serving backfire effect of smudging the idea's edges, twisting its inner content to bypass pitfalls, and protecting it with caveats.

In the absence of enough clarity, even a torrent of information from the internet can run into a backfire effect. It's difficult to find successful rebuttals for ideas that either mean little in practice or can be made to mean whatever one spontaneously decides. Ideas with murky relationships to reality tend to be immune to contrary details. It should be unsurprising that beliefs of bolder clarity are frequently scorned as "naive" or "simplistic" by seasoned believers who are well-acquainted with the advantages of meager or wholly invisible expectations.

I'm inclined to place a lot of blame on intentional or accidental core lack of clarity about beliefs. But I admit there are other good explanations for why the internet's plentiful information could fail to sway believers. The less encouraging possibility is that, despite all the internet has to offer, they're gorging on the detrimental bytes instead. They're absorbing misleading or fabricated "statistics", erroneous reasoning in support of their current leanings, poor attempts at humor that miss and obscure the main point, manipulative rumors to flatter their base assumptions...

Sunday, October 08, 2017

giving authenticity a spin

I'm guessing that no toy top has led to more furious arguments than the one in the final scene of Inception. The movie had previously revealed that its eventual toppling, or alternatively its perpetual spinning, signals whether the surrounding context is reality or a dream. Before this scene, the characters have spent a lengthy amount of time jumping between separate dream worlds. In the end, has the character in the scene emerged back into reality or hasn't he? The mischievous movie-makers purposely edited it to leave the question unanswered.

My interest isn't in resolving that debate, which grew old several years ago. But I appreciate the parallel with the flawed manner in which some people declare the difference between their "real" viewpoints and others' merely illusory viewpoints. They're the true realists and those others are faux realists. They're living in the world as it is, unlike the people who are trapped in an imaginary world. They're comparable to the dream infiltrators who made a successful return journey—or someone who never went in. They've escaped the fictions that continue to fool captive minds. All of their thoughts are now dependable messengers that convey pure reality. They're the most fortunate: they're plugged directly into the rock-bottom Truth.

I'm going to call this simplistic description naive realism. It seems to me that there are more appropriate ways of considering my state of knowledge. And the starting point is to recognize the relevance of the question represented by Inception's last scene. Many, hopefully a large proportion, of my trusted ideas about things have a status closer to real than fantasy. Yet simultaneously, I need to remember the undeniable fact that human ideas have often not met this standard. It's plausible that my consciousness is a mixture of real and fantasy ideas. Essentially, I'm in the ambiguous final movie scene. The top is spinning, but it also seems to have the slightest wobble developing. The consequence is that I can't assume that I'm inhabiting a perfectly real set of ideas.

Nevertheless, the opposite cynical assumption is a mistake too. A probably incomplete escape from fantasy doesn't justify broadly painting every idea as made-up rubbish. The major human predisposition to concoct and spread nonsense doesn't imply that we can only think total nonsense moment by moment. One person or group's errors don't lead to the conclusion that all people or groups are always equally in error. The depressing premise that everybody knows nothing is far, far too drastic. I, for one, detest it.

I'd say that the more honest approach is to stop asserting that someone's whole frame of mind is more real than another's. My preference is to first acknowledge that ideas are the necessary mediators and scaffolding that make sense of raw existence. But then acknowledge that these ideas can, and need to be, checked somehow for authenticity. It's not that one side has exclusive access to unvarnished reality and the other side is drowning in counterfeit tales. Both sides must use ideas—concepts, big-picture patterns, theories—to comprehend things and experiences. So the more practical response is for them to try to sift the authentic ideas from the inauthentic as they're able. The better question to identify their differences is how they defend the notion that their ideas are actual echoes of reality.

The actions that correspond to authenticity come in a wide variety. What or who is the idea's source? What was the source's source? Is the idea consistent with other ideas? Is the idea firmly stated, or is it constantly changing shape whenever convenient? Are the expected outcomes of its realness affecting people's observations or activities? Does it seem incredible, and if so then is it backed by highly credible support to compensate? Would the idea be discarded if it failed, or is it suspiciously preserved no matter how much it fails? Is it ever even tested?

Once again the movie top is a loose metaphor for these confirming details. A top that never quits spinning isn't meeting the conditions expected of authentic objects similar to it. Of course, by not showing what the top does, part of the intent of the final movie scene is to ask the secondary question of whether people should care. I hope so. It's tougher and riskier to screen ideas for authenticity, but the long-term rewards are worth it.

Saturday, September 23, 2017

what compartments?

For good reason, compartmentalization is considered to be a typical description for how a lot of people think. This means that they have isolated compartments in their minds for storing the claims they accept. Each claim is strictly limited to that suitable area of relevance. Outside its area it has no effect on anything, and nothing outside its area has any effect on it. Imprisoning claims in closed areas frees them to be as opposite as night and day. None are forced to simply be thrown out, and none are allowed to inconveniently collide.

The cost is that defining and maintaining the fine distinctions can be exhausting sometimes. But the reward is a complex arrangement that can be thoroughly comprehended, productively discussed, and flexibly applied. By design it satisfies a variety of needs and situations. Not only are contradictions avoided within a given compartment, but there are also prepared excuses for the contradictions between compartments.

It's such a tidy scheme that it's tempting to assume that this form of compartmentalization is more common than it is. Career philosophers, theologians, scholars, and debaters probably excel at it. Yet I highly doubt that everybody else is always putting that much effort into achieving thoughtful organization, self-coherency, and valid albeit convoluted reasoning. It seems to me that many—including some who offer peculiar responses to surveys—don't necessarily bother having consistent "compartments" at all. The territories of their competing claims are more fluid.

Theirs are more like the areas (volumes? regions?) of water in a lake. The areas are hardly separate but are touching, pushing, exchanging contents, shrinking, growing. People in this frame of mind may confess that their ideas about what's accurate or not aren't anchored to anything solid. Their felt opinion is endlessly shifting back and forth like the tide. Their speculations bob around on the unpredictable currents of their obscure intuitions.

Even so, the areas can be told apart. The areas aren't on the same side of the lake or aren't the same distance from the shore. The analogy is that, if prodded, the believer may roughly identify the major contrasting areas they think about. The moment that their accounts start to waver is when the areas' edges and relative sizes are probed. Tough examples help to expose this tendency. Would they examine pivotal claim Q by the rules of A or B? Perhaps they're hesitant to absolutely choose either option because they sense that committing would imply a cascade of judgments toward topics connected to Q.

In effect, the boundaries they use aren't like compartment walls but like waterproof ropes threaded through a spaced series of little floats. These ropes are positioned on the surface of the lake to mark important areas. Unlike the taut lane ropes at an indoor swim race, they're tied loosely enough to drift around a bit. Similarly, although believers may be willing to lay out some wobbly dividing lines on top of their turbulent thoughts, their shallow classifications could be neither weighty nor stable. They may refuse to go into the details of how they reach decisions about borderline cases. They can't offer convincing rationales for their differing reactions to specific claims.

This fuzzy-headed depiction raises a natural question. They certainly can't have consciously constructed all this for themselves. So how did it develop? What leads to "informal compartmentalization" without genuine compartments? The likely answer is that ideas from various sources streamed in independently. Close ideas pooled together into lasting areas, which eventually became large enough to be a destination for any new ideas that were stirred in later. The believer was a basin that multiple ways of thinking drained into. As their surroundings inevitably changed, some external flows surged and some dried up. Whether because of changing popularity or something more substantial, in their eyes some of their peers and mentors rose in trustworthiness and some sank. Over time, the fresh and stagnant areas were doomed to crash inside them.

The overall outcome is like several selves. The self that's most strongly activated in response to one idea isn't the self that would be for some other idea. This kind of context-dependent switching is actually a foremost feature of the brain. Its network structure means that it can model a range of varying patterns of nerve firings, then recall the whole pattern that corresponds to a partial signal. It's built to host and exploit chaotic compartmentalization.

A recurring metaphor for this strategy is a voice vote taken in a big legislature. The diverse patterns etched into the brain call out like legislators when they're prompted. The vote that emerges the loudest wins. The result is essentially statistical, not a purely logical consequence of the input. The step of coming up with a sound justification could happen afterward...or never. The ingrained brain patterns are represented by the areas in the lake. Overlapping patterns, i.e. a split vote, are represented by the unsteady area boundaries.
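As a concrete sketch of that pattern-completion behavior, here is a toy associative memory in the Hopfield style. The model choice, the six-unit patterns, and the corrupted cue are all hypothetical illustrations of the general mechanism, not a claim about how the brain literally works:

```python
import numpy as np

# Two stored "belief patterns" as +/-1 firing vectors (hypothetical toy data).
patterns = np.array([
    [ 1,  1,  1, -1, -1, -1],
    [-1,  1, -1,  1, -1,  1],
])

# Hebbian learning: each stored pattern etches itself into the weights.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=10):
    """Settle from a partial or noisy cue toward the nearest stored pattern."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# A partial signal: the first pattern with its last entry corrupted.
cue = np.array([1, 1, 1, -1, -1, 1])
print(recall(cue))
# -> [ 1.  1.  1. -1. -1. -1.]  (the full first pattern, recovered)
```

The recalled winner is decided by overlap counts, a statistical tally rather than a logical derivation, which is exactly the voice-vote picture.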

The main lesson is that a many-sided viewpoint can be the product of passive confusion or willful vagueness, not mature subtlety. Unclear waters may be merely muddy, not deep. Arguing too strenuously with someone before they've been guided into a firm grasp on their own compartmentalization is a waste. It'd be like speaking to them in a language they don't know. One can't presume that they've ever tried to reconcile the beliefs they've idly picked up, or they've noticed the extent of the beliefs' conflicts. It might be more fruitful to first persuade them to take a complete, unflinching inventory of what they stand for and why. (Religious authorities would encourage this too. They'd prefer followers who know—and obey—what their religion "really" teaches.)