Tuesday, September 05, 2017

placebo providence

It might be counterintuitive, but I've found that in some ways the broad topic of religion abruptly became more interesting after I discarded mine. I stopped needing to be defensive about the superiority of my subset of religion and the worthlessness of others. Seeing all of them as mainly incorrect—albeit not all equally awful or decent—grants a new kind of objectivity and openness. Each item is a phenomenon to examine, rather than an inherent treasure or a threat. Additionally, the shift in viewpoint converted some formerly easy questions into more intriguing puzzles. One of these is "How can people sincerely claim that they've experienced specific benefits from their religion's phantoms?"

The answer "the phantoms exist" doesn't sound convincing to me now. But some alternatives are suggested by a longstanding concept from another context: the placebo. Although placebo pills or treatments don't include anything of medical relevance, recipients may report a resulting improvement in their well-being. Placebos famously illustrate that it's not too unusual for something fake to leave a favorable impression. The analogy almost writes itself. In terms of actual causes and effects, earnest belief in a generous spirit is superficially like earnest belief in a sugar pill.

Without further discussion, however, borrowing a complicated concept is a bit of a cheat. To do so is to gloss over too many underlying details. If something is said to work because it acts like a placebo, then...what is it acting like, exactly? The first possibility is that it's truly acting like nothing. As time goes on, people are continually affected by countless things, and the mixture of various factors churns and churns. So cycles occur depending on which things are dominant day by day. Good days follow bad days follow good days follow bad days. With or without the placebo, a good day might still have been coming soon. The good day was a subtle coincidence. Or the subject could've had an abnormally unpleasant day not long ago, after which a merely average day felt like an improvement. This is why testing only once, on one subject, shouldn't be conclusive.

A second possibility of placebo activity is that the subjects' awareness of it cued them to spend extra effort seeking, noting, and recalling good signs, as well as brushing aside bad signs. It's like telling someone to look up and see a cloud that's shaped like a horse; they might have said the cloud looked like something else if they'd seen it first. Or it's like asking them whether they're sure that they didn't witness a particular detail in the incident that happened yesterday. Their expectations were raised, so perception and memory were skewed. This tendency works by indirectly coloring their reactions to stimuli. So of course it's applicable to subjective outcomes, i.e. just generally feeling better. As anyone would expect, placebos score consistently higher in medical trials for subjective outcomes such as temporary pain relief than in trials for objective outcomes such as shrinking tumors. 

On the other hand, placebos' subjective gains point to a valuable principle. When root causes don't have swift solutions, enhancing the quality of someone's experience of the symptoms is still both feasible and worthwhile. Regulating attention and maintaining a balanced perspective are excellent mitigation strategies. Deflecting consciousness in a productive direction is an ability that can be developed. If that's too ambitious, then at least indulging in positive distraction will help. Shrewd choices about what to minimize and what to magnify lead to a definite, often underrated boost in mood. And it doesn't require lies and placebos.

The last possibility of a placebo's inner workings is that it affects the subject's brain, and then the alteration in the subject's brain adjusts the behavior of other organs. Unfortunately, the amount of control by this route is frequently misunderstood. For example, mental pictures and verbal affirmations don't "will" the immune system into doubling its effectiveness (though an overactive immune system would be a problem too). Keenly wanting doesn't telekinetically rearrange the relevant particles to match.

Nevertheless, a few types of crude brain-body governance are undeniable. These are called...emotions. The body rapidly responds to calm, agitation, fear, aggression, sadness. It's stressed or not stressed. Whether the cause is fictional or real, large or small, sudden or ongoing, vivid or abstract, comparable signals flow instinctively from the brain to the rest of the body. If a placebo stops and/or starts the flow of disruptive signals at the source, its power to bring about a tangible change isn't a total mystery. It'd be more surprising if a substantial reduction in emotional chaos didn't have desirable consequences for the subject and their life.

These possible explanations for placebos correspond to categories of why people are apparently satisfied by the interventions of their preferred incorporeal entities. The first corresponds to lucky timing. Circumstances were about to brighten with no intervention, so a nonexistent intervention was simply sufficient. The second corresponds to slanted judgment. The thought of the intervention prods the believer to fixate on the upsides and rationalize the downsides. They look harder because they presume that the intervention they believe in had to have done something. The third corresponds to the physical side effects of believing in the intervention. If holding to the belief makes the believer more centered, slower to jump into unhealthy and unwise decisions, and closer to a group of supportive believers, then its rewarding side effects for the believer are substitutes for the missing rewards of the unreliable content of the belief.

One final comment comes to mind. Of all the statements made about placebos, the most curious is the proposal to try to achieve the "placebo effect" in regular clinical practice. To keep the prescriptions ethically acceptable, the recipient is fully informed about what they're getting! This is like the question of why I didn't keep participating in my religion after I realized that it wasn't, um, accurate. My retort is that I lost my motivation to bother taking a pill that I knew was only a placebo.

Saturday, August 19, 2017

advanced friend construction

There's no shortage of unflattering, childlike comparisons to irritate the religiously devout. I know this from my present position and also from my time on the receiving end. For example, they're caught up in incredible fairytales, or they're hopelessly dependent on the support of an ultimate parental figure, or they're too scared of social pressure to admit that the emperor has no clothes on.

But for today I'm interested in another: that they never outgrew having an imaginary friend. They're obligated to immediately deny this jab because, of course, their particular god isn't a human invention. But I wonder, only half-jokingly, if there's a second strong reason for them to feel offended by the comparison. It echoes the irritation of a forty-year-old when intense dedication to building intricate scale models is equated with an uncomplicated pastime for kids.

Simply put, they aren't casual, sloppy, or immature with what they're doing. They're grown adults who are seriously committed to performing excellent and substantial work, thank-you-very-much.

They most emphatically aren't in a brief juvenile phase. They apply great effort and subtlety to the task of maintaining their imaginary friend. (I should note once again that I'm narrowly describing the kind of religiosity that I've been around, not every kind there is.) They often thank it in response to good events. They often plead for help from it in response to bad events. They study sacred writings about it. They recite and analyze its characteristics. They develop an impression of its personality. They sing about its wonderfulness and accomplishments ("what a friend we have..."). They compare notes and testimonials with other people who say they're dear friends with it too. They sort out which ideas of theirs are actually messages sent from it. They apologize to it when they violate its rules. They attempt to decode its grand plan via the available scraps of clues.

The amount of toil might prompt outsiders to question why a non-imaginary being's existence is accompanied by such a demanding project. This reaction is sensible but fails to appreciate the great opportunity it presents for massive circular reasoning. Because a typical, straightforward imaginary friend doesn't present a large and many-sided challenge, the follower's endless striving indicates that theirs must not be in that category. Why would there be elaborate time-honored doctrines, requiring a sizable amount of education and debate, if theirs were just imaginary all along?

Furthermore, they may point to an additional huge difference that's much more perceptible to them than to an outsider looking in: theirs isn't solely a nice friend to fulfill their wishes and never disagree with them. Theirs is an autocrat of their thoughts and behavior ("lord"). It's far from qualifying as a friendly companion in some circumstances. It sees and judges everything. It forces them to carry out acts (good or bad) which they'd rather not. It dictates a downgraded view of them even as it dictates ceaseless adoration of itself.

All the while, to nonplussed observers they appear to be inflicting psychological self-harm. Or as if something unseen is invading them and turning their own emotions into weapons against themselves. An outrageous parallel of this startling arrangement is the pitiful Ventriloquist in the twisted "Read My Lips" episode of Batman: The Animated Series. He carries and operates a dummy named "Scarface", which looks and talks like an old-fashioned gangster boss. They treat each other like separate individuals. They don't seem to know each other's thoughts. Scarface habitually orders his tolerant underlings to speak to him, not to the mostly quiet and unmoving ventriloquist—whom he calls "the dummy". He's always in charge. He gets angry when the ventriloquist tries to contribute to conversation.

The utter absurdity is that the ventriloquist is the sole physical vehicle for the second personality Scarface, yet he's not immune from the paranoiac's hostility. His bully's existence would be totally impossible without his constant assistance. He's in confinement in his last scene, after Batman has prevailed and the dummy is demolished. And he's keeping himself busy carving a replacement...

I realize this parallel is dramatic in the extreme, although I'd note that the gap between it and some people's self-despising religious mentality is unfortunately smaller than it should be. Generally their creation isn't as uncaring or domineering as Scarface. But nor is it as tame and passive as an imaginary friend. For them, it blooms out of the soil of their brain activity into a functioning character who exhibits specific qualities. It gathers up several sentiments in a coherent form. It's built from aspects of the self. It starts as a bare pencil outline, then it's repeatedly redrawn and colored in section by section. It takes on a virtual life of its own. Over time, the person's brain manages to acquire a "feel for" the character. Thereafter, even without voluntary control, it can habitually compute the character's expected opinion. The character is an extra "brain program" that's loaded and ready, and like a memory it springs up whenever something activates it. The more single-minded someone is about the character, the more triggers it will have.

The curious and thoroughly mockable catchphrase "What Would Jesus Do?" is one example of intentionally invoking an embellished character to answer a moral question. People similarly use these abilities when they infer that someone they know well would've enjoyed a joke or a song. This is simple empathy redirected in an abstract direction through the flexibility of human intelligence. These common abilities to simulate human or human-like characters—"Theory Of Mind" (ToM) is a well-known label—are the natural outcome of humans' evolutionary advantage of deeply complex social interaction. Noisy movie audiences confirm every day that characters certainly don't need to be nonfictional to be intuitively comprehended and then cheered or booed.

Sometimes, when people exit a religious tradition like the one I did, they may comment that they went through an emotional stage which resembled losing a friend. After spending years of advanced construction on the "friend" they lost, their level of broken attachment is genuine. For me personally, that stage happened not to be as problematic. I wasn't as...close with my imaginary friend as many are. So my puzzlement overtook my former respect early in the journey out. "Who are you, anyway? You don't make any sense anymore. I think I don't know you at all. Maybe I never did."

Saturday, July 15, 2017

upon further contemplation

I'm guessing this won't come as a shock: the sweeping advice that more faith would've prevented my unbelief fails to dazzle me. From my standpoint it seems equivalent to the bullheaded instruction, "You should have never let yourself revise what you think, no matter what well-grounded information came along, or how implausible or problematic your preferred idea was shown to be." I knowingly discarded the beliefs after open-eyed judgment. If more faith is seriously intended as a defense tactic, then it has a strong resemblance to the ostrich's. (The inaccuracy of the head-burying myth can be ignored for the sake of lighthearted analogy...)

I'm more entertained, but no more convinced, by specific recommendations that would've fortified my beliefs. Contemplative prayer's assorted techniques fit this category. These are said to improve the follower's soul through the aid of quietness, ritual, reflection, and focus. The soul is methodically opened up to unearthly influence. It's pushed to develop an engrossing portrayal of the supernatural realm. It's taught to frequently note and gather signs of this portrayal's existence. The edifying periods of intense concentration might be guided by spiritual mottoes, textual studies, mental images, dogmas. Intervals of fasting and solitude might be employed to heighten attentiveness. Presumably, all this effort goes toward two interlocking goals. First is an inspiring appreciation of God. Second is often having in-depth, warm, productive connections with God, at both scheduled and unscheduled times. Zealous contemplators like to declare that they're "in a relationship, not a religion" and that they walk and talk with God.

Nevertheless, I wouldn't rashly accuse them of telling deliberate lies about the phenomena their techniques lead to. Aside from the embellishment and reinterpretation that inevitably slip in, I don't assume that they're fabricating their entire reports. Dreams aren't perceptible outside of the dreamer's brain either, but that doesn't imply that no dreaming occurred. When they say they sense God, I'm willing to accept that their experience of sensing was triggered in them somehow. If an experience roughly corresponds to the activation of a brain region, then purposely activating the region could recall the experience. Anywhere in the world, a whiff of favorite food can conjure a memory of home.

The actual gap is between the meaning that they attribute to their contemplative techniques and the meaning that I attribute. They claim that they're harnessing the custom-made age-old wisdom of their particular tradition to come into contact with their unique God. But when I reexamine their techniques in a greater context, I can't avoid noticing the many close similarities with sophisticated psychological training. I'm referring to training by the broadest nonjudgmental definition. We're social creatures who have highly flexible brains. We're training each other and ourselves, by large and small degrees, constantly though not always consciously, for a host of admirable or despicable reasons. Where they perceive specialized paths to divinity, I perceive the unexceptional shaping of patterns of behavior and thinking.

No matter the topic, a complicated abstraction is usually a challenge for psychological training. Extra care is needed to ensure that it's memorable, understood, relevant, and stimulating. A number of ordinary exercises and factors can help. Undisturbed repetition is foremost. Obviously, over the short term it stops the abstraction from promptly fading or being pushed out by distractions. But for the knowledge to persist, undisturbed repetition shouldn't be crushed into a single huge session. It should be broken up into several, preferably with evenly spaced time in-between. Each should build on top of the previous. Old items should be reviewed before new items. It also helps when the material is itself put in a repetitive and thoughtful form, in which parts of the new items are reminiscent of parts of the old items. Mnemonics, rhymes, and alliteration have benefits other than stylistic flourishes.

Better still is to supplement undisturbed repetition with active processing. Asking and answering questions about the abstraction forces it to come alive and be comprehended. The questions should be decisive and piercing, not vague, superficial, and easy. The aim is greater clarity. A clear abstraction appears surer and realer than a hazy one. Its familiarity increases as it's meditated on and reused. A secondary effect of active processing is to establish its links to other ideas. Its distinguishing characteristics are exposed. Its boundaries are drawn. It ceases to be a mysterious, remote, solitary blob. Instead it's nestled firmly in its known position by neighboring ideas: it's a bit like this and a bit unlike that.

If possible, the active processing should include personalizing the abstraction. A person may or may not be permitted to adapt it to suit themselves. But in either case, they can translate it into their own words and the symbols they find significant. And they can try to pinpoint informative overlaps between it and their larger perspective. Applying it to their vital concerns instantly raises its value in their thoughts. Lastly, to the extent that it influences their individual choices, it accumulates a kind of undeniable role in their personal history from then on.

Personalizing an abstraction works because brains have an innate talent for pouncing on information that affects the self. Stories and sense perception are two more brain talents that can be successfully targeted. The brain already has skills for absorbing concrete narratives and sensations. A compelling story is superior at conveying the qualities of someone or something. Visualizing something abstract aids in delivering it into consciousness, regardless of whether the visualization is merely a temporary metaphor. Paradoxical as it sounds, attaching many little sensory details can sometimes be beneficial for retention. Vividness enables an abstraction to grab and hold a bigger slice of awareness. Excessively minimal or dull descriptions engage less of the brain. Although a concise summary is quicker to communicate than a series of differing examples, the series invokes sustained attention. The multiple examples present multiple chances, using several variations, to make at least one enduring impression.

For mostly the same reason, adding a factor of emotion works too: it's a "language" which is built into the brain. It marks information as important. It boosts alertness toward an abstraction. Meanwhile, the flow of associations pushes an understanding of its parts. The parts to be opposed—problems or enemies—are enclosed by a repelling frame. The parts to be welcomed—solutions or allies—are enclosed by an appealing frame. A thorough bond between emotion and an abstraction can last and last. Its potency could rival or exceed the potency of a bond to a tangible object. Tangible objects are hobbled by realistic weaknesses and complex mixes of advantages and disadvantages, which bind them to conflicting emotions.

It so happens that all these considerations feature prominently in the contemplative techniques that would've hypothetically sheltered me from unbelief. That's why I conceded earlier that diligent practice of the techniques probably does fulfill the promise...according to the contemplator. When psychological training is carried out well, I'd expect it to be effective at introducing and reinforcing craftily constructed abstractions. The end results are that numerous stimuli spontaneously give rise to the cultivated ideas. The ideas become the lenses for observing everything else. Dislodging them to make room for contrary thoughts starts to feel, um, unthinkable. Contemplators see themselves producing subtler insight into the being that created them and provided them an Earth to live on. People like me see them producing subtler refinements of the being they're continuously creating and for whom they've provided a brain to "live" in.

However, contemplation is doomed to be a flawed source of proof because it has no essential differences from the "more faith" remedy I first criticized. It often functions independently of tested realities outside the brain. When it's relying on imaginative modes, it operates separately from rigorous argumentation, pro or con. If I'd been more accomplished at it, would my escape have been longer and wobblier? I suppose. Yet I doubt that I could've fended it off forever.

Friday, June 23, 2017

environmental contamination

When people discard their beliefs about the supernatural, they pose a troubling but inescapable question to those they left behind: why? What prompts someone to alter their allegiances so drastically, after ingesting the One Truth for years and years? Those left behind opt to comfort themselves with a medley of convenient explanations. For instance, similar to their obsessions with "purity" in other domains, they can suggest that the apostate's thinking was contaminated. Wicked lies must have poisoned their beliefs, which were originally perfect and intact. If the crafty sabotage had been resisted, the beliefs would've been preserved indefinitely.

In rigid situations, this suggestion really could succeed. Like an impermeable museum display box enclosing an ancient artifact, total isolation does prevent or slow changes to hardened opinions. This is exactly why stricter groups tightly restrict their members' access to outside information in full or in part. The separation is sometimes enforced through an explicit set of rules, sometimes through the social pressure of the group's treatment of the offender.

The obvious weakness of this practice is that it must be extreme, or else contamination will creep in somehow at some point. If the follower is determined to circumvent the barriers, and they're not under constant surveillance and confinement, the separation will probably fail sooner or later. But if the follower opts for the opposite reaction of internalizing the barriers, the risk of "contamination" drops to near nil. They resolve to act as their own sentinel, eagerly watching out for and squashing potential threats to the beliefs they follow.

When I was younger, I was more like the willing participant than the rebel. I didn't want to consume media that was openly antagonistic to the core beliefs I had. I knew such media would've upset me, so it didn't feel like an appealing use of my time. And in that period I categorized myself as an unshakable follower; I wasn't especially worried about wading into unending philosophical wars. I hadn't dissected my assumptions enough yet. The most potent issue of all, the problem of evil, wasn't urgent to me yet because, er, things in my surrounding egocentric awareness were largely pleasant.

Surprisingly (...or not...), the contamination of my thoughts happened anyway. I didn't need to search far and long for alternatives. As it turned out, these were lurking in my environment. Standard biological evolution is one example of a subject that I eventually stumbled on without trying. I didn't bother to read a lot about it or biology or creationism or Earth's age. The religious groups I was in didn't heavily emphasize the importance of rejecting it, although some individuals, such as the parents who home-schooled, did enthusiastically inject it into discussions. I thought of it as "controversial" among believers—like disagreeing about Bible editions—so perhaps I was doubtful that a closer look would give me a firm, worthwhile answer.

My neutrality shifted in high school after someone lent me their copy of the slyly written Darwin's Black Box. It presented "intelligent design" through a breezy mixture of easy-to-read prose, arguments built on commonplace analogies like mousetraps, and technicalities drawn from biochemistry. It provided lengthy chemical names. But like a "For Dummies" book, the major points didn't require deep understanding. In comparison with past creationist works, its official position was "moderate": its single-minded focus was on the alleged infeasibility of gradually evolving microscopic cellular systems, rather than on completely refuting all evolution. Moreover, it underlined its attempt at moderation by conspicuously declining to offer a name for the off-stage Designing Intelligence. No quotations from sacred texts were included. It didn't ask for agreement with dogma. Like typical science books sold to the wide market, it was hopefully aimed at anyone with a casual interest. Sections nearer to the end spelled out the ostensible goal, which wasn't to justify a belief in the author's preferred god. It was to achieve the status of legitimate counterbalancing science. 

After I returned the book, I mostly didn't think about it. I did note that neither it nor its intelligent design ideology were in major news outlets or publications, except in quotes by public figures such as George W. Bush. Biology certainly hadn't been triumphantly upended. In a few years I had perpetual access to broadband internet at college, so one lazy day I remembered it and performed a spontaneous internet search. I discovered that the reasoning of Darwin's Black Box had been speedily dismantled right when it came out. Its list of biochemical challenges was countered by plausible evolutionary pathways. After observing its low reputation in the eyes of the majority of specialists, my previous naive trust in it sank. Of course, if I hadn't read it, maybe I wouldn't have been motivated to browse evolution-favoring websites in the first place.

This wasn't the last time that the one-sided clash of evolution and intelligent design sprang from my environment into my thoughts. Kitzmiller v. Dover came along. It was a trial about inserting references to intelligent design into the curriculum of public school science classes. The author of Darwin's Black Box was one of the witnesses. His ridiculed answers were disastrous for his cause. Although a courtroom isn't the appropriate setting for scientific judgments, the verdict was unequivocal and impressive. Intelligent design wasn't suitable for science class in public school. My vaguely noncommittal attitude turned strongly in favor of evolution. To repeat, I already knew there were believers like me who accepted evolution's accuracy, so this adjustment didn't push me to reconsider everything.

Anyway, biology and geology weren't my usual subjects when I was at the library or the bookstore. I was intrigued by physics and psychology. Nevertheless, these areas transmitted contaminants too. I was nonchalantly skipping books about atheism, but I was reading books that relayed information in the absence of religious slants or premises. I learned that up-to-date physics was amazingly engaging, extensive, and confirmed. But unlike religion, its various findings didn't support the story that anything related to humans, or specifically human-like, was central to the "purpose" or the functioning of the cosmos. In big scales and small, human concerns and abilities weren't essential. Despite their supreme cleverness, they were one species on one planet. Fundamentally they were derivative. They were built out of atomic ingredients and dependent on numerous strategies to redirect entropy for a while.

I absorbed the implicit style of mostly leaving ghostly stuff out of normal physics phenomena. I just assumed that the divine and demonic realms existed apart and parallel in some undetectable, undefined way. The realms' understated interventions on the mundane plane were generally compatible with physics—recovering from an infection or obtaining a new job—except on rare occasions such as starting the universe or resurrecting. In short, the arrangement I settled on was a popular one: more physics-plus than anti-physics. My thoughts were contaminated by an acknowledgment of physics' effectiveness at comprehending things. The symptom of this contamination was that by default I inferred particles and forces at work everywhere, not spirits.

As for psychology, religion's role was more prominent. The trouble was that its role was so frequently described as harmful. It could be tied in with a patient's delusions of paranoia or megalomania, or lead to anxious guilt, or shortsightedly stifle the real root causes of distress. Some thinkers labeled it a sophisticated manifestation of...an infantile coping mechanism. I took some offense at that, though I did take the hint to ensure my beliefs about the supernatural weren't reducible to direct gratification of my emotional needs.

One memory that's grown funnier to me is my head-spinning encounter with the book that grabbed me with its overwrought title, The Origin of Consciousness in the Breakdown of the Bicameral Mind. It explained the Bible as reports of ancient auditory hallucinations. I wasn't nearly ready to take its argument seriously, so its raw effect on me was more emotional in nature than intellectual. I was engulfed in the initial shock that this kind of bold speculation dared to exist. It was inviting me to demote the Bible to a mythological text and the voice of God to a trick of the brain. I hadn't faced these irreverent options so bluntly before. It was like an "out-of-belief experience", temporarily floating out and above the ideas and looking down at them like any collection of cultural artifacts. My faith stayed where it was, but I didn't forget the dizzying sensation of questioning all of it.

I don't want to give the wrong impression about packing my leisure time with education. I read lots of fiction too. Yet it wasn't an environment free from contamination either. When I looked up more of the works by two authors of fiction I'd enjoyed, Isaac Asimov and Douglas Adams, I collided with more atheists again. I read Asimov's "The Reagan Doctrine" online. It was short, but remarkably self-possessed and methodical in the facts it applied to break apart the false equivalence of religiosity and moral trustworthiness.

After Adams' death, I bought The Salmon of Doubt without skimming through it. I was anticipating the unfinished Dirk Gently portion. I hadn't known what else was in it. It contained a number of Adams' non-fiction essays and interviews, and several of these featured his strikingly unapologetic atheism. For example, he created the metaphor of an ignorant puddle declaring that its perfectly fitting hole must have been created for itself. I hadn't purchased a book centered around atheism, but nonetheless I had purchased a book with passages that were cheerily atheistic. In the environment of fiction I'd been contaminated by the realization that there were people who were convinced that I'd been gravely misled all my life...and who had written stuff I liked...and who seemed good-humored and reasonable. They weren't any more detestable or foolish than us followers.

This realization didn't spin me around 180 degrees. Nothing ever did. My reversal was the result of a fitful sequence of little (20-degree?) turns. However, there were a few sources of contamination that immediately preceded my breakthrough: artificial intelligence, information theory, and cognitive science. Like the rest of the contaminants, these didn't develop overnight but started out as innocent-looking seeds. The earliest crucial one was Jeremy Campbell's The Improbable Machine. I was a teen when I picked up this book on impulse (the store had steeply cut its price). It was a wide-ranging exploration of the theme of connectionism: artificial intelligence by mimicry of the brain's evolved layout of legions of tiny interconnected units acting cooperatively. According to it, the opposite extreme was the route of translating all the brain's abilities into orderly chains of logic.

Before, I'd been accustomed to assigning artificial intelligence to the narrow project of constructing bigger and bigger logic machines—the fragile sort that Captain Kirk could drive insane with a handful of contradictory statements. Campbell's thesis was that connectionism was a promising model for the brain's more soulful traditional powers: intuitive leaps, creativity, perception, interpretation of ambiguous language, etc. I was accidentally contaminated by the faint suspicion that souls, i.e. nonphysical spirits/minds, weren't strictly necessary for doing whatever the brain does. I began to imagine that the puzzle was only difficult and complex, not doomed to failure by mind-matter dualism. Tracing the brain's powers to its very structure had the long-term side-effect of casting doubt on crediting something inhabiting the structure.

A decade later, I returned to these topics. My career in software was underway. I regularly visited blogs by other software developers. The recommendation to try Gödel, Escher, Bach showed up more than once. I ignored it for a long time because of my preconceptions. When I finally gave it a chance, Hofstadter's compelling effort stirred my curiosity. I moved on to devouring more of his, and I also checked for more of Campbell's. This time I sped through Grammatical Man, which ignited a prolonged fascination with information theory. I consumed more on this subject. And I paired it with cognitive science, because I wanted to know more about the brain's dazzling usage of information. Amazon's automatic recommendations were helpful. Some books probing consciousness concentrated more on anatomy and some more on philosophical dilemmas. My first Daniel Dennett buy wasn't Breaking The Spell; it was Consciousness Explained.

The accelerating consequences were unstoppable. My desire had been to read about how people think, but the details were often contaminated with well-constructed criticisms of the whole principle of the soul. I'd once been confident that the mystery of inner experiences was invincible. It was a gap that my faith could keep occupying even when all else could be accommodated by nonreligious accounts. Instead, this gap was filled in by the increasingly likely possibility that inner experiences were essentially brain actions.

For me, the scales were tipped. The debate was decided. All the signs of an immaterial layer of reality had been exposed as either absent, illogical, fraudulent or illusory, or at best ambiguously unconvincing. I recognized that I could continue believing...but if I did it would be with the shortcoming that the unsatisfying "truth" of the beliefs exerted no distinguishable difference on reality. If I'd then been familiar with Carl Sagan, I could've compared the beliefs' contents to his famous illustration of an invisible, silent, incorporeal dragon in the garage.

I made a slow, deliberate choice between two sides. Contrary to the contamination excuse, interfering outsiders weren't responsible for misleading me. I wasn't playing with fire, either intentionally or not. I didn't hastily abandon my beliefs as soon as I became aware of another stance. I wasn't an impressionable simpleton who thoughtlessly trusted the internet. Enlarging my knowledge didn't forcibly corrupt or "turn" me. The hazard was never in merely seeing the other side. It was in paying close attention to what each side used to validate itself. The pivotal contamination was learning that the other side pointed to data, mathematics, self-consistency, appreciation of common flaws in human cognition, and prudent restraint before relying on unverified beliefs. But as for the side I'd been taught...

Monday, May 29, 2017

follies of ethnocentrism

More and more as of late, I've noticed that commentary about right-wing American evangelicals has been asserting that their racist leanings are beyond question. I realize that there is excellent past and present evidence that supports this generalization. I agree that, for a substantial portion of them, it's applicable.

However, in the portion who I know through personal experience—which I confess only represents a subgroup of a subgroup—the prejudice I detect is a smidge more complex...or at least flows along more complex channels. For lack of a better label, I'll use "the respectables" to refer to this set. (Yes, I'm glad to report that I'm familiar with some whose warmhearted behaviors and outlooks are unobjectionable. I'm not referring to them.) The respectables are shocked by accusations of racism. After all, they never suggest that race is at the innermost core truth of someone. It's not a biological limit on abilities or personality. It isn't destiny.

Part of the reason is that the respectables treasure the sentiment that all people are targets for the duty of universal evangelism. Attendees of any race are admitted to public events. In large gatherings, some of the regular goers are likely to be in the so-called "diverse" category. Officially, all people-groups domestic and foreign need to be afforded every chance to convert. Adventurous stories of evangelism in faraway nations are more praiseworthy, not less, when the setting is "exotic", and that goes for the exoticism of the setting's races. Although the pompous and intrusive quest to Christianize all humanity is nauseating, it certainly undermines the very idea of using race alone to disqualify people.

So the respectables aren't hostile toward races in theory. They don't believe (or believe but refrain from saying?) that a race is genetically inferior or superior. Adopting a child of another race is accepted, as is marrying a spouse of another race. Their anxieties have oozed in a less obvious direction. In the most general terms, they're dismissive and fearful of dissimilar cultures. They rank each one according to their estimate of its deviation from their own. Whatever else they're disciples of, they're disciples of ethnocentrism.

Its effects are less vicious than raw racism but unfortunately are tougher to discern and exterminate. It might not motivate angry attacks, but it'll motivate avoidance, or just chilly distance. The barrier it creates isn't necessarily impossible to cross, but in any case it's significant. Individuals face the extra burden of first overturning the immediate verdicts that were assigned to them. They aren't utterly excluded, but they have...unsettling questions surrounding them. In the race for social status, they begin from behind the starting line.

Like any human behavior, the kind of apprehensive ethnocentrism I'm trying to describe doesn't stay neatly contained in the boundaries of its dictionary definition. It's a frequent teammate of the racism of thinking that a race is synonymous with a culture. With this link, the outcome of stigmatizing the culture becomes, well, synonymous with stigmatizing the race. The difference is negated.

Nevertheless, race is at least not the only cultural sign. Ethnocentrism's domain is broader than race, because culture itself has many details occurring in endless varieties. The list of additional cultural signs to exploit includes clothing, hair/skin styling, etiquette, economic class, language, slang, accent, geographic region, religious adornment, occupation, and food/music preferences. To the ethnocentric person, race as well as any of these may end up functioning as clues of reassurance or dread about the culture that controls the stranger. Because they have rationales about why theirs is the best, they choose which signs matter the most to them and which cultures are better or worse approximations of theirs. A stranger who displays enough signs can be successfully pigeonholed as a member of an admired culture, despite showing some signs of a scorned culture too.

Yet ethnocentrism's potential to take several observations into account at once cannot compensate for its unfair perceptions. Usually it's already committing a pivotal error: it's really sorting among slanted and incomplete abstractions (impressions, clichés) of cultures. This is to be expected. A vast culture, with a long history and a wide footprint, has numerous types and subsections and components, and upsides and downsides of each. It cannot fit into ethnocentrism's coarse rankings of worthiness unless it's drastically pared, flattened, summarized, frozen in time, and severed from outside influences. A largely uninformed collection of selected fragments is hammered into a convenient lens. And the distorted lens is used to impatiently classify anyone who has (or seems to have) some of the signs. The problem with this result is predictable. Regardless of the culture's shared elements, it probably accommodates a host of almost opposite opinions on a host of topics. There's no visible hint to distinguish where the stranger falls in this range.

Furthermore, patchy awareness of the culture could be magnified by patchy recognition of the various levels of influence that cultures have. In order to believe that the culture the person supposedly signifies can sufficiently explain them, their capacity to make their own decisions needs to be disregarded. Again there's a spectrum. Some are devotees who fully embrace it. Some opt to be nominal, largely detaching their identities from it. And some are selective in what they absorb or ignore, and these selections can change over time. Depending on their environment, they could simultaneously be selecting from other cultures, even if they're overlooking the logical incompatibility of the mixtures. Or to some degree they could be declining a preexisting path, instead inventing special ideas and novel ways of life. The point is that perhaps the majority of their choices are dictated by a culture, but that can only be speculation until their personal stances are known.

The pitfalls of pursuing ethnocentrism don't end there. Its approach is characterized by warily eying culture mostly from the outside, i.e. not by talking to the participants. It should be no surprise that it's prone to misinterpreting the actual practice and meaning of the contents. The importance of context shouldn't be underestimated. Statements might not be serious or literal. Symbols might have obscure, even ironic, meanings. Problematic items might be rationalized or downplayed. To add to the confusion, the pronouncements published by "authoritative" organizations often diverge from what many of the participants genuinely think. The area of interest should be how the culture is lived, not naive, superficial analyses of its minutiae. If everyone within it views a puzzling story as a mere exaggeration for dramatic effect, then highlighting the story's outlandishness accomplishes nil. An external critic's disagreement about what is and isn't a figure of speech isn't pertinent to them.

In combination, these considerations justify being initially unmoved by the declaration, "I'm not a deplorable racist—I'm a respectable citizen who's concerned about the cultural mismatches between us and them". Clearly, placing focus on "culture" nonetheless provides an abundance of chances to maintain one-sided ideas about massive numbers of human beings, hastily erase all fine distinctions, and typecast the undeserving. The possible consequence is another pile of simplistic attitudes which are barely an improvement.

Cerebral followers, who've learned their lines well, can swiftly repeat the customary retort to remarks such as these: the horrifying spectre of moral relativism. Namely, they assert that people with positions like mine are obligated to say that the morality of every piece of every culture cannot be judged consistently. But I'm not that extreme (or that hypocritical). I cheer well-done critique, if its aim is narrow and precise. And, as much as possible, I prefer that its aim isn't excessively pointed toward, or conspicuously avoiding pointing toward, any singular group of cultures. Thoughtfully learning then delicately responding to an issue isn't the same as sweeping demonization of an entire way of life or of the people who identify with it. When disputing a specific item, I want to hear an explanation of it violating a nonnegotiable ethical principle, not its level of contradiction with sacred tenets or with some alternative's "common sense". Cultures, like other human creations, have sinister corners and blind spots that thoroughly earn our distaste. But we can extend the courtesy of not presuming that sinister appearances are always correct and of reflecting on whether a startling difference is trivial or shallow rather than perverse.

Sometimes this is easy to decide and sometimes not. If it were never complicated for the people deciding, I'd wonder if they're paying enough attention to the whole picture...or if, like an ethnocentrist, they make up their minds beforehand anyway.

Sunday, May 14, 2017

a question of substance

When one group is in the habit of ridiculing an opposing group's beliefs, the easily attacked topics become customary. Mentioning them is so commonplace that no additional explanation is necessary anymore. They typically act as familiar, comforting reference points to casually toss in with other remarks. "Well, we shouldn't be shocked by this, after all. Don't ever forget that the mixed-up people we're talking about also believe _____."

For example, the people on the other "side"—I mean those who still follow the set of beliefs that I scrapped—often parrot the superficial story that a lack of sound religious commitment forces a lack of sound ethical commitments. Their false presumption is that ethics are always shaky and individualized apart from systematized religion's supposed timelessness and objectivity. They imagine that people without religion can't have steady principles to work with. Unbelievers' rootless ethics are to blame for every "incorrect" view and behavior. Their morality is said to be hopelessly deficient because they're inventing right and wrong however they wish.

At one time, I would've glibly agreed that this prejudicial story is self-evident. Needless to say, now I object to virtually every aspect of it, from start to finish. I've become part of the group it stigmatizes and seen for myself that it's wrong about us. Fortunately, we can console ourselves with the numerous targets that religious beliefs richly offer us in return. In my setting, usually these take the form of peculiar Christian myths and doctrines. Transubstantiation certainly fits that description. It's the doctrine that a ritual can replace the substance of food and drink with the "sacred presence" of Christ. Its plain definition is enough on its own to stand out as bizarre and questionable. Quoting the belief of literally eating the real substance of a god suffices as an open invitation for biting commentary.

Simply put, it presents endless possibilities for wisecracks. Let me emphasize that that's mostly fine with me. I'm not broadly opposed to jokes about an idea...especially the rare jokes which manage to be funny and original. Calling attention to an idea's absurdity shouldn't be confused with "insulting" the idea's followers. Too many nations have created oppressive laws through that confusion. Though, at the same time, the more that a joke strays off topic and verges on outright jeering at people, the less I like it. And back in my former days, the more likely I would've been to briskly skip over the joke's message altogether.

My quibble is something else: the humorous treatment of transubstantiation risks an underappreciation of its twisted philosophical side. When a critic shallowly but understandably chuckles that after transubstantiation the items are visibly no different than before, they're not responding to the doctrine's convoluted trickery. According to its premises, the change wouldn't be expected to be detectable anyway. Everything detectable is dismissed as an attribute (some writings use the word "accident" as a synonym of attribute); the ritual replaces only the substance.

The distinction between attribute and substance is strange to us because it's a fossil from old debates. These debates' original purpose was to analyze the relationships among the multiple parts of conscious experience. If someone senses one of their house's interior walls, they may see the color of its paint, feel its texture, measure its height, and so on and so on. Ask them twenty questions about it, and the responses build a long list of attributes. After the wall is repainted or a nail is driven into it to hang a picture, then a few of the wall's attributes have changed. But the wall is still a wall. The substance of what it is hasn't changed. After a demolition crew has blasted away at a brick wall and left behind a chunk, the chunk is still a wall; it's a wall with smaller attributes. In this scheme, a thing's substance is more like an indirect quality, while direct observations of the thing yield its attributes. The attributes are the mask worn by the thing's substance, and the mask has no holes.

The transubstantiation doctrine reapplies such hairsplitting to justify itself. It proposes that the items' attribute side is kept as-is and Christ's presence is on the items' substance side. By a regularly scheduled miracle, the presence looks like the items, tastes like the items, etc. It's subtler than transformation. How exactly is the process said to occur? The answer is mystery, faith, ineffability, magic, or whatever alternative term or combination of terms is preferred. The doctrine asserts something extraordinary, but then again so do official doctrines about virgin births, 3 gods in 1, eternal souls. Merely saying that it violates common sense isn't enough; common sense can be faulty. And merely highlighting its silly logical implications doesn't address its base flaws.

I think it's a fruitful exercise to articulate why the core of it, the split between attribute and substance, isn't plausible. The first reason is probably uncontroversial to everybody whose beliefs are consistent with materialistic naturalism: human knowledge has progressed in the meantime. We can catalog an extensive set of a thing's innate "attributes" through reliable methods. The discoveries have led to the standpoint of understanding a thing through its attributes of chemical composition, i.e. the mixture of molecules of matter within it and the molecules' states. This standpoint is deservedly applauded for its wide effectiveness because, as far as anyone has convincingly shown, human-scale attributes derive from these composition attributes. (Emergence is a strikingly complex form of derivation. Its turbulent results are collectively derived from the intricate combinations of many varied interactions in a large population.)

Asking another twenty questions to gather more attributes isn't necessary. Ultimately, the composition attributes are exhaustive. Removing or modifying these wouldn't leave untouched a remaining hypothetical "substance" of some kind. These have eliminated the gap in explanation that substance was filling in. These aren't on the surface like the attributes obtained by crudely examining a thing's appearance. The suggestion that all factual examination only goes as deep as a thing's outside shell of attributes stops sounding reasonable now that modern examination is so sophisticated and penetrating.

The second reason why the split between attribute and substance isn't plausible is more debatable, although I sure restate it a lot here: the meanings of thoughts should be yoked with actions and realities (outcomes of actions). The connections might be thinly stretched and/or roundabout. At the moment the actions might only be projected by the corresponding thought(s), but if so then there are unambiguous predictions of the real outcomes once (or if) the projected actions take place. The actions might be transformations of thoughts by the brain: recognition, generalization, deduction, translation, calculation, estimation. Under this rule, thoughts of either a thing's attributes or of its substance could mean less than initially assumed.

Attributes are marked by tight association with particular actions considered in isolation. Wall color is associated with the action of eyeing the wall while it's well-lit or capturing an image with a device that has a sufficient gamut. A wall dimension is associated with the action of laying a tape measure along it from end to end, for instance. Substance is marked by the inexact action of classifying things, perhaps for the goal of communicating successfully about them using apt words. It's an abstraction of a cluster of related characteristics. For a wall, a few of these characteristics are shape, i.e. length and height longer than depth, and function, i.e. enclosing an interior space from an exterior space. When the thing matches "enough" of the cluster, the average person would lean toward classifying it as a wall.

The observer is the one who decides whether to treat a characteristic as a flexible attribute or a member in the abstract cluster of a given thing's "essential substance". This is in line with the composition standpoint, which conveys that particles are indifferent to the more convenient labels people use for larger formations. There isn't anything embedded in each particle that evokes one of these categories over the other. The composition standpoint asserts that particles of the same kind are freely interchangeable if the particles' relevant physical properties are alike.

Indeed, particle interchangeability happens to be doubly significant because, as we all know, things deteriorate. A thing's owners may choose to repair or replace its degraded pieces. When they do, they've removed, added, and altered a multitude of its particles. Yet they may willingly declare that the thing is "the same" thing that it was before, just marginally better. In other words, the substance of it hasn't been lost. Like the famous Ship of Theseus, cycles of restoration could eventually eliminate most of the thing's original particles—which may be presently buried in landfills or turned to ashes by fire. Meanwhile biological contexts withstand continual flows of particles too, as cells die, multiply, adapt. If the action of declaring a thing's substance "unchanged" continues on despite its shifting composition, then the meaning of its substance apparently isn't even bound to the thing's matter itself. Part of the substance has to be a subjective construction.

Nevertheless, the consequence isn't that the entire thought of substance must be discarded as utterly false or fake. The coexistence of objective and subjective parts underpins a host of useful thoughts—such as self-identity. Rather, the need is to remember all the parts of the meaning of substance, to avoid the mistake of interpreting it as if it's an independent quantity or quality. Such a mistake could possibly feed the curious conjecture that a wielder of uncanny powers can seamlessly substitute these independently-existing substances upon request...

Sunday, April 16, 2017

fudge-topped double fudge dipped in fudge

Two truisms to start with. First, finite creatures like us don't have the ability to swiftly uncover all the details of each large/complex thing or situation. However, we still need to work with such things in order to do all sorts of necessary tasks. Second, we may combat our lack of exhaustive knowledge about single cases by extracting and documenting patterns from many cases and carefully fashioning the patterns into reusable models.

I'm using "model" in a philosophically broad sense: it's an abstracted symbolic representation which is suitable for specifying or explaining or predicting. It's a sturdy set of concepts to aid the work of manipulating the information of an object. This work is done by things such as human brains or computing machines. The model feeds into them, or it guides them, or it's generated by them. It reflects a particular approach to envisioning, summarizing, and interpreting its object. Endless forms of models show up in numerous fields. Sometimes a thoroughly intangible model can nonetheless be more tangible and understandable than its "object" (a raw scatter plot?). 

Some models are sketchy or preliminary, and some are refined. Some seem almost indistinguishable from the represented object, and some seem surprising and obscure. A model might include mathematical expressions. It might include clear-cut declarations about the object's characteristic trends, causes, factors, etc. The most prestigious theories of settled science are the models that merit the most credence. But many models are adequate without reaching that rare tier. Whether a model is informal or not, its construction is often slow and painstaking; it comes together by logically analyzing all the information that can be gathered. It's unambiguous, though it might be overturned or superseded eventually. It's fit to be communicated and taught. Chances are, other models of high quality were the bedrock for it; if not, then at minimum these others aren't contradictions of it. It's double-checked and cross-checked. Its sources are identifiable.

Despite the toil that goes into it, the fact is that a typical model is probably incomplete. Comprehensive, decisive data and brilliant insights could be in short supply. Or portions of the model could be intentionally left incomplete/approximate on behalf of practical simplicity. When relevant features vary widely and chaotically from case to case, inflexible models would be expected to perfectly match no case besides the "average". Perhaps it's even possible to model the model's shortcomings, e.g. take a produced number and increase it by 15%.
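Tangentially, since my career is in software: that throwaway example of modeling the model's shortcomings can be sketched in a few lines of Python. This is purely a toy of my own making (the functions and the 15% figure are hypothetical, echoing the example above), not a technique drawn from any particular source:

```python
# Toy sketch: a deliberately crude model that runs low, plus a second-order
# "model of the model's shortcomings" that bumps its output by a flat 15%.
# Both functions and the 15% figure are hypothetical illustrations.

def crude_model(quantity):
    # Stand-in for a painstakingly built but systematically incomplete model.
    return quantity * 2

def corrected(quantity, fudge_factor=0.15):
    # Take the produced number and increase it by 15% (the documented fudge).
    raw = crude_model(quantity)
    return round(raw * (1 + fudge_factor), 2)

print(corrected(10))  # the crude 20, nudged up by 15% to 23.0
```

The point of writing the correction down, rather than eyeballing it case by case, is that the correction becomes part of the model itself: transparent, repeatable, and open to being double-checked.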

For whatever reason, the model and some cases will differ to some extent...but improving the model to completely eliminate the individual differences would be infeasible. So the disparity from the model is fudged. I'm using this verb as broadly as I'm using "model". It's whenever the person applying the model decides on an ad hoc adjustment through their preferred vague mix of miscellaneous estimations, hunches, heuristics, and bendable guidelines. Hopefully they've acquired an effective mix beforehand through trial and error. (The result of fudging may be referred to as "a fudge".)

If a realm of understanding is said to be an art as much as a science, then the model is the science and the fudging is the art. In the supposed contrast of theory with practice, the model is the theory and the fudging is one part of the practice. The model is a purer ideal and the fudging is a messier manifestation of the collision with the complexity of circumstance. The model is generally transparent and the fudging is generally opaque. Importantly, a person's model is significantly easier for a second person to recreate than their fudging.

The crux is that a bit of a fudge, especially if everyone is candid about it, is neither unusual nor a serious problem. But overindulgence rapidly becomes a cause for concern. The styles of thinking that characterize normal fudging are acceptable as humble supplements to models but not as substitutes. One possible issue is reliance on implicit, unchecked assumptions. Models embody assumptions too, but model assumptions undergo frank sifting as the model is ruthlessly juxtaposed with real cases of its object.

Another issue is the temptation to cheat: work backwards every time and keep quiet about the appalling inconsistencies among the explanations offered. Someone's slippery claim of fudging their way to the exact answer in case A through simple (in retrospect) steps 1, 2, and 3 should lose credibility after they proceed to claim that they fudged their way to the altogether different exact answer in case B through simple (again, in retrospect) alternative steps 1', 2', and 3'. A model would impose a requirement of self-coherency—or impose the inescapable confession that today's model clashes with yesterday's, and the models' underlying "principles" have been whims.

Yet another issue is the natural predisposition to forget and gloss over the cases whenever fudging was mistaken. The challenge is that new memories attach through, then later are reactivated through, interconnections with prior ideas. But a mismatch implies that the fudging and the case don't have the interconnections. The outcome is that a lasting memory of the mismatch won't form naturally. The primal urge to efficiently seek out relations between ideas comes with the side effect of not expending memory to note the dull occurrence of ideas not having relations. After being picked up, unrecorded misses automatically drift out of memory, like the background noise of an average day. A rigid model plugs this hole because it enforces conscious extra effort.

The issues with replacing modeling with too many fudges don't stop there. A sizable additional subcategory stems from the customary risk of fudging: judging between hypothetical realities by the nebulous "taste" of each. That method would be allowable...as long as a person's sense of taste for reality is both finely calibrated and subjected to ongoing correction as needed. Sadly, these conditions are frequently not met. Some who imagine that their taste meets the conditions may in actuality be complacent egotists who're incapable of recognizing their taste's flaws.

A few comparisons will confirm that overrated gut feelings and "common" sense originate from personal contexts of experience and culture. Divergent experiences and cultures produce divergent gut feelings and common sense. An infallible ability to sniff out reality wouldn't emerge unless the sniffer were blessed with an infallible context. That's...extremely improbable. Someone in an earlier era might have confidently said, "My long-nurtured impression is that it's quite proper to reserve the privilege and responsibility of voting to the kind of men who own land. Everybody with common sense is acutely aware of the blatant reality that the rest of the population cannot be trusted to make tough political decisions. Opening the vote to them strikes me as foolhardy in the innermost parts of my being."

But context is far from the sole way to affect taste. Propagandists (and marketers) know that repetition is an underestimated strategy. Monotonous associations lose the taste of strangeness. Once someone has "heard a lot" about an assertion, they're more likely to recall it the next time they're figuring out what to think just by fudging. Its influence is boosted further if it's echoed by multiple channels of information. For people who sort reality by taste, its status doesn't need to achieve airtight certainty to be a worthwhile success. Success is achieving the equivocal status that there "must be something to it" because it's been reiterated. A fog of unproven yet pigheaded assertions would be too insubstantial to meaningfully revise a model, but with enough longevity it can evidently spoil or sweeten a reality's taste by a few notches.

Repetition is clumsy, though. Without a model to narrow its focus, the taste for reality is susceptible to cleverer attacks. Taste is embodied in a brain packed with attractions and aversions. The ramification is that emotional means can greatly exaggerate scarce support. A scrap of support set in a passionately moving frame can alter taste all by itself. In the midst of weighing perspectives by impromptu fudging, the stirring one receives disproportionate attention. If the scale has been tilted masterfully, someone will virtually recoil from the competing perspectives. Gradually establishing the plausibility of a model is a burden compared to merely tugging on the psychological reins.

If distorting taste by exploiting the taster's wants seems brazen, the subtler variation is exploiting what the taster wants to believe. It may be said that someone is already half-convinced of notions that would fit snugly into their existing thoughts. The desire for a tidy outlook can be a formidable ally. It's not peculiar to favor the taste of a reality with fewer independent pieces and pieces that aren't in discord. The more effortlessly the pieces fall into place, the better. Purposefully crafting the experience that a proposal gratifies this component of taste is like crafting the experience that it's as sound as a mathematical theorem. It will appear not only right but indisputable. Searching will immediately stop, because what other possibility could be more satisfying? ...Then again, some unexpected yet well-verified models have been valued all the more as thrilling antidotes to small-mindedness.

This extended list of weaknesses suggests that compulsive fudgers are the total opposite of model thinkers. However, the universe is complicated, so boundaries blur. To repeat from earlier, model thinkers regularly need to fudge at the admitted limits of their models. And the reverse has been prevalent at various times and locations in human history: fudge after fudge leads to the eventual fabrication of a quasi-model. The quasi-model might contain serviceable fragments laid side by side with drivel. It might contain a combination of valid advice and invalid rationales for the advice. The quasi-model is partially tested, but the records of its testing tend to be patchy and expressed in all-or-nothing terms. It could be passed down from generation to generation in one unit, but there's uncertainty about which parts have passed or failed testing and to what degree.

Once someone does more than fudge with the quasi-model, it might develop into a legitimate model. Or, its former respectability might be torn down. The dividing line between quasi-model and model is a matter of judgment. If it's resting on fudges on top of fudges, then signs point to quasi-model.

Monday, March 27, 2017

blotted visions

To repeatedly insist that a popular belief is inaccurate is to invite an obvious follow-up question: then why is it popular at all? And this question turns out to have an abundance of answers as widely varied as the believers themselves and the effects the belief has in their lives. One answer that has grabbed my attention recently is that a belief can serve a function like an inkblot test. People can use it as raw material for representing and processing their thoughts. It evokes strong reactions, which they observe and analyze. It verbalizes and conceptualizes common features of being a typical human.

The strategy is like dropping iron filings near a magnet to reveal the magnetic field. Thereafter, they may feel dependent on the belief playing this illuminating role for them. It becomes their ongoing lens. The well-known example is morality. Given that their own conscience has always been seen through their belief, they unimaginatively assume that the lack of the belief-lens in someone else implies a lack of conscience too. Their symbols are "powerful" influences on them because the symbols' power comes from them and their mental associations. From then on, invoking the symbols is a means of self-regulating—or of others in society manipulating them, of course. A few chants or a few bars of a familiar song can deftly adjust moods...

I'm reluctant to indulge this idea too far in the Jungian direction. Picturing these inkblot beliefs as sharply defined entities in a collective unconscious doesn't feel correct to me. I don't consider the primal brain that sophisticated. I'd rather say that some long-lived beliefs have a time-tested flair for recruiting and orchestrating inborn instincts. The beliefs can give a framework for interpreting the instincts and provide particular channels for them (again, I hesitate to presume too much by calling the channels "sublimation").

It hardly needs to be said that the potent narrative pushed by a belief frequently distracts from its degree of accuracy. The relevant joke among non-fiction writers is "Don't let the facts spoil a good story". Captivating tales will spread rapidly regardless of the tales' (dis)honesty. Worse, stories confined by the bounds of rigorous honesty are at a disadvantage compared to the ones that aren't. Details tend to be imprecise, complex, and dispassionate, so stories with verified details tend to be more challenging and off-putting. As condescending as it sounds, for a lot of careless people in history and right now, an uncluttered story that rouses a profound interest in them is more effective at ensnaring their loyalty than a tangled story that's authentic. And the boost in effectiveness is virtually guaranteed if corresponding interest was intentionally cultivated in them over and over by people they trust. It's worth noting that the interest might not be baldly self-gratifying; it might be interest in having a simplistic, conquerable scapegoat.

Clearly, the outcome may be positive or negative when a belief is able to stimulate people's drives and/or reflections like a carefully-constructed inkblot. Nor is the basic technique unique to one category of belief. In physics, "thought experiments" exercise analytical understanding. In philosophy, an insightful intellectual has referred to similar contrivances as "intuition pumps". Numerous mythologies and fables are expressly repeated not to convey historical accounts but to render an earnest lesson as vividly and memorably as possible. But creations that aren't as lesson-focused still could be written with the goal of kindling intriguing discussions. Creators choose to employ certain words and images which they expect to act as subtle yet meaningful shorthand.

The depth of impact ensures that subjects remain susceptible to analogous inkblots long after they discard the belief. It doesn't take much for a newer instance to be reminiscent of the old. I experienced this myself when I went to Logan a little while back (were you thinking that I'd mention Arrival instead for this topic?). In this movie, Wolverine is a protective superhuman who voluntarily undergoes bloody torture to the point of death, for the sake of people who cannot save themselves. He presents himself to be pierced in place of them. He's very old. He's emotionally remote (to say the least), but once his commitment is made it's ferocious in its determination and it doesn't expire. He's closely acquainted with pain. His body doesn't deflect bullets and sharp objects like some superheroes'. After sustaining wounds that would normally kill, he just has the power to rise up again—er, almost always. Eventually he's a substitute father figure who advises against being monstrous toward others.

Nevertheless, he's someone who knows his dangerous character faults and his many mistakes...and also someone who, after some convincing by a respected authority, seeks to do what he can to redeem himself, mitigate consequences of his existence, and repay the kindnesses he's received. He has the hope of reducing the chance that someone else will endure a life like his. He's in a battle—and the movie makes it extremely literal—against the frightening aspects of himself. He may not be "reborn as a new man", but he's prodded into behaving in changed, productive ways. He transforms from isolation and despair to a renewed mission of improving the parts of the world he decides he must.

I was surprised by the intensity of my spontaneous responses to these inkblots, like water gushing through a well-worn rut. Temporary appearances by these latent sentiments weren't sufficient to overturn my established judgments. But I was forcefully reminded that the beliefs I dropped, despite having critical flaws of all shapes and sizes, had exploited a striking capacity to "worm in" to my sensibilities. Although I snapped myself out of it years ago, on occasion I can recognize the unabashed appeal of impulsively clinging to something that "speaks to you" and appears to be embodying "truths too deep for words"...albeit only with the precision of an inkblot.

Tuesday, February 28, 2017

surveying the chasm

Identifying with a group doesn't stop me from critiquing the attitudes and customs of some who are in "my" group. This was also the case when I identified with the religious groups of my earlier years. I despised the rampant traits of ascribing the worst motives to anyone who doesn't believe in an identical god concept, reflexively distrusting anything unfamiliar, insisting on unwavering conformity to the smallest of doctrinal trivia, and so on...and so on...

The grievance I have with a few atheists online is their pattern of communicating as if an unbridgeable chasm separates them from everyone who disagrees with them. Rather than asserting that their spirits have become holier than thou, they implicitly assert that their elevated thinking processes have, without exception, become "sounder than thou". The poor wretches on the chasm's remote side aren't like the sharp-witted people in the group. Those oafs are more or less guaranteed to suffer from confusions of all shapes and sizes. Through oppressive "faith" they're apt to adopt awful ethical principles and/or absurd statements. They might be described as objects of pity who were entangled in psychological traps during the gullibility of childhood. They do ponder things, but they're unable to really comprehend and revise their mistakes. They don't notice self-contradictions. All expectations for them are lowered. When a low expectation is publicly met—perhaps published via a sensational, eagerly exchanged internet article—the usual arch reaction is, "No bombshell here, eh, am I right?" Commentary might feature terms like superstition, fairy tale, tooth fairy, Santa, magic, irrational, tribal, sheep, regressive, and of course hypocrite.

I recognize upfront that the unattractive tendency I purposely exaggerated isn't present in everyone who follows, tightly or loosely, my philosophies. Actually it might be mostly confined to a disproportionately loud, attention-grabbing minority. Or maybe it's more widespread but in a more moderate and tacit form. I need to watch out for when I slip into it occasionally.

Obviously it leaves a deeper impression on me because I started on the opposite side of the alleged chasm. Plus, I regularly interact with people who are "there" now. I'm motivated to mark our differences using more levels of contrast. I and a lot of other apostates know that we ourselves once appeared to live for a prolonged period on the old side even as we concealed our shifting sympathies and embryonic doubts. If there were a chasm, then aspects of us were already halfway over it, which led to us feeling like we were the odd ones.

Admittedly, there are significant numbers who haven't ever had these internal struggles to a comparable degree. Their entire selves are casually intertwined with one side. No part of them is receptive to alternatives. Staying put is as involuntary and vital as breathing. They themselves may be openly unconcerned by the prospect of chasms between them and other subcultures—they may insist on it. ("We take for granted that we're on the right track if we think and act nothing like you.") I can see how being around them often enough would entrench a chasm mindset.

Then there are the somewhat innocuous believers whose supernatural perspectives are fluid/informal, or fragmentary/unassuming, or almost totally irrelevant to their lives, or constructed by them from out of the miscellaneous sparkly bits of more complete beliefs. Each of their concrete opinions and values, evaluated purely in isolation, might bear a closer resemblance to the people who are said to be across a chasm from them, than to the radical believers whom they are said to belong with. They're strong candidates for joining together in causes in common.

Broadly speaking, I don't find it sufficient to represent a wildly varied assortment of views and people with the repugnant examples alone. I'd prefer constant acknowledgment of the challenge of making summary judgments about all the diverse paths people take to deviate from materialistic naturalism. The majority of these paths are (or derive from) the abundant products of unrestricted group-facilitated creativity, socially reinforced and embellished for generation after generation. In fact, it's difficult to validly address a solitary subcategory, Abrahamic beliefs and believers, without first imposing narrower conditions on which segments are being addressed.

I should clarify that my wish for fewer prejudicial generalizations is more about style than content. I'm not reversing my position about the other side's inaccurate notions. An error or misdeed can be called what it is. And although I'm maintaining that not all of my dissimilarities from all of that side's occupants are wide as chasms...I'd say an important gap does set the sides apart. In my reckoning, two attributes pinpoint the definitive disparity between Us and Them.

The first attribute is wary but expansive curiosity. This species of curiosity reaches out to a sweeping extent of well-grounded information. It's the willingness and hunger to draw from any source that has clear-cut credence. It's not unfiltered absorption of baseless speculation or hearsay. It's considering unlikable information without immediately rejecting it and considering likable information without immediately pronouncing it legitimate.

The second attribute is conscientious introspective honesty. This species of honesty is shown by persistently weighing the worthiness of personal thoughts, especially when the thought is dearly held. Honesty, e.g. not looking away, is essential twice: honestly examining thoughts below the superficial layers, then honestly grappling with the authentic evaluation. Depending on the person and the circumstances, they may not heed this attribute's tough demands until they're presented with the chance multiple times.

The details make the difference. I'm not proposing that curiosity and honesty are foreign to Them, only these precise forms. Or, as it was with me in the past, these forms could be operating in deceptively restrained states. In Them, expansive curiosity is prevented from being too expansive. Honest introspection is conscientiously carried out but not too conscientiously. Boundaries surround the safe territory. Some commonplace questions have whole sets of rote replies. They serve as tolerable escape valves for the inner tensions caused by nagging doubts. But unanticipated questions that cut too deeply are taboo—and some radical replies to the permitted questions are taboo.

Labeling such people on the side of Them isn't a shocking consequence of a rule that strictly ties the gap-not-chasm to the two attributes. But I'm fully aware that it relabels another group entirely: people who may agree with me about a great deal. The gap that's more meaningful to me lies in how conclusions are obtained, not in the conclusions themselves. Concurring with me on selected subjects isn't quite enough evidence that we think alike.

It can't be assumed that the two decisive attributes are appreciated by someone who by chance has never been steered toward supernatural stances, or was actively steered away by the pressure of their in-groups. Their distaste for particular ideas may be as externally guided ("cultural") as my bygone loyalties to the exact same ideas. Or maybe they were pleased to drop the ideas because their disposition is inclined to be contrarian, nontraditional, or rebellious. Or maybe they were driven out by uncaring treatment and senseless prohibitions. These reasons and personal journeys aren't automatic disqualifications; if they still have the attributes I'm looking for then they're fine in my outlook. If not...they probably have my support anyway, but my ability to relate to them will be reduced.

Additionally, whenever people have followed unsystematic routes to proper conclusions, the chances are higher that they'll follow those routes to improper conclusions regarding other topics. Experience shows people's surprising ingenuity at harmonizing a "right" answer with plenty of "wrong" answers: examples abound in political discussions. To be correct about ____ isn't to gain universal immunity from error. As I've read again and again, if every religion vanished then people would fill the vacuum with poorly grounded beliefs of other kinds such as conspiracy theories and pseudo-medicines. Every day, lots of nonreligious people unfortunately fulfill this trite rule of thumb. (It deserves reiterating that this predicament isn't an excuse to decrease skeptical criticism of many types of religion. The excellent reason why these targets have been more frequently hit is that these beliefs have been more methodically spread, embedded, handed excessive power, and involved in one way or another with awful dehumanizing ethics.)

Yet the potential shades of gray don't stop here either. Time can be a factor. Attributes aren't necessarily permanent but are demonstrated anew by the ongoing project of reapplying them. They could wax and wane. Or they could be impaired by individualized blind spots. Someone can unintentionally fail to engage their curiosity as much as they could have or honestly pay as close attention to the underpinnings of their thoughts as they could have. The lesson is that sometimes there barely is a gap at all between the quality of justifications employed by Us and Them...much less a chasm between people whose brain functions have and haven't "ascended" to a superior enlightened plane. We remain Homo sapiens aspiring to possess the most accurate ideas we can find.

Monday, February 13, 2017

android deluge

Months ago I recalled an obscure catalyst of my gradual de-conversion: wrestling with the arguments of the "emerging church". But I recall another one too that isn't usually featured in de-conversion stories. My hazy thinking was prodded forward by a deluge of androids. I'm referring to machines intended to exhibit a gamut of person-like characteristics, from appearance to creativity to desire. At the time in 2008, the specific instances with the greatest prominence to me were in Battlestar Galactica (2004) and Terminator: The Sarah Connor Chronicles. These two aren't uniquely important—obviously so, given that they were revivals of decades-old creations. The deluge of androids started in fiction long before these two. And it certainly has continued since, across a variety of media. By my estimate it's not lessening in strength...

It hardly needs saying that I'm not claiming that imaginary android characters proved or disproved anything. The critical factor they contributed was to broaden and direct imagination. They implicitly and explicitly highlighted the standard philosophical thought-experiments. "If an android were sufficiently advanced in its duplication of human thought and behavior, would its 'mind' be like ours? If not, why not? Is there a coherent reason why it becoming sufficiently advanced is impossible?" Some of these questions have been connected to "zombies" instead of androids, but the gist is the same.

As the unreal androids kept nagging me with the questions, my reading was providing me with corresponding answers. I was busy digesting two well-grounded premises, each of which is routinely confirmed. First, the elemental ingredients of humans are no different from the elemental ingredients of non-human stuff. The human form's distinctiveness arises from intricate combinations of the ingredients at coarser levels and from the activity those combinations engage in. Second, like I said in the preceding blog entry, information is encoded in discrete arrangements of matter (and/or energy flows). Ergo the details of the matter used in the arrangement aren't relevant except for obligatory requirements on its consistency and persistence. Information is perpetually copied/translated/transformed from one arrangement of matter to another. DNA molecules house information without having consciousness. The ramification of fusing the two premises is that because hypothetical androids are made of matter like people are, they're capable of manipulating information like people. This doesn't imply that it'll be easy to construct androids that encode information with comparable subtlety.

To admit this much is to invite the next epiphany. The perspective is reversible. If they're enough like us, then we're like them. Presuming that androids' intellects can function as artificial variations of people's intellects, couldn't someone with a twisted mentality—a sufficiently advanced android maybe—regard people as the original biological templates for androids? Calling the suggestion dehumanizing misses the point. Being a "conscious machine" all along, constructed from cells in place of gizmos, doesn't subtract from our subjective experience one bit. The experience of freedom is accessible to self-virtualizing robots.

Saturday, February 11, 2017


Last time, inspired by Sean Carroll's big picture look at philosophy, I repeated the big picture which I've expressed here before. As described by information theory, ideas in the loosest sense are symbolic arrangements of matter and energy flows. Brain matter has evolved accordingly to act as a flexible channel for the reception, assembly, modification, and storage of ideas.

Countless energy-consuming actions in the brain link ideas together. The links enable the ideas to be "hints" for one another. I'm not the first to suggest the analogy of a crossword puzzle. Each answer's written clue might be vague, but the intersections of the words are crucial clues too. A strongly certain answer reinforces (or negates) the correctness of the answers that reuse its squares. The importance of context shouldn't be underestimated.
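The crossword analogy can be put in toy computational terms. Here's a minimal sketch (the function, weights, and numbers are all invented for illustration) of how an answer's overall standing might blend its own vague clue with the strength of the answers that share its squares:

```python
# Toy model of the crossword analogy: an answer's confidence combines its
# direct clue's strength (0..1) with the strengths of intersecting answers.

def combined_confidence(own_clue, crossing_clues):
    """Blend direct evidence with the average strength of the answers
    that reuse this answer's squares."""
    if not crossing_clues:
        return own_clue
    context = sum(crossing_clues) / len(crossing_clues)
    # Weight direct evidence and context equally in this toy version.
    return (own_clue + context) / 2

# A vague clue standing alone...
alone = combined_confidence(0.4, [])
# ...versus the same clue intersecting two strongly certain answers.
supported = combined_confidence(0.4, [0.9, 0.95])
print(alone, supported)  # the intersections raise the answer's standing
```

The point isn't the arithmetic but the structure: context flows through intersections, so a weakly-clued answer can still earn confidence from its neighbors.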

Through these linking actions, people will inevitably identify some linked ideas, call them endpoints, which should be relevant in some way to external actions. By "external" I merely mean that the actions don't happen solely inside the brain. The absorption of information via eyes and ears would be enough to qualify. Actions lead to outcomes, and outcomes are deeply affected by realities. Afterward, people can judge the level of agreement between the outcomes and the endpoints. But this isn't the whole effect. Based on the already mentioned links, their revised judgments of the endpoints' accuracy should revise their judgments of the linked ideas' accuracy. Ideas, actions, and outcomes are shaped by a triangle of mutual relationships.
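The feedback in that triangle can be sketched mechanically. In this toy version (the link graph, idea names, and update weight are all invented for illustration), checking an endpoint against a real outcome nudges every idea linked to it:

```python
# Toy model of the idea/action/outcome triangle: verifying an endpoint
# revises it, and the revision ripples back to the ideas linked to it.

links = {"milk is sour": ["smell test gives sour odor"]}
accuracy = {"milk is sour": 0.5, "smell test gives sour odor": 0.5}

def record_outcome(endpoint, matched, weight=0.5):
    """Judge the endpoint against its outcome, then move each idea linked
    to it part of the way toward the endpoint's new accuracy."""
    accuracy[endpoint] = 1.0 if matched else 0.0
    for idea, endpoints in links.items():
        if endpoint in endpoints:
            accuracy[idea] += weight * (accuracy[endpoint] - accuracy[idea])

record_outcome("smell test gives sour odor", matched=True)
print(accuracy["milk is sour"])  # rises from 0.5 toward the confirmed endpoint
```

A real web would have many ideas, many links, and subtler update rules; the sketch only shows the direction of travel, from outcome back through the links.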

I recognize that this big picture will provoke complaints. It grants a disappointingly mundane status to ideas and then it pairs this demotion with a sizable role for error-prone people. Wouldn't it be preferable if ideas were said to be unchanging and independent? That way, at least a few ideas exist "out there" by themselves, apart from relying on squishy, messy humans. This alternative is to insist that ideas are sturdy things in their own right.

My view is that the sturdy-thing notion of ideas does have a diminished counterpart...but only through accepting a subtle redefinition. Instead of an idea having a quality of sturdiness, it can be evaluated by the number and intensity of the convergences it's involved in with other ideas. When a swarm of small easily-checked endpoint ideas has been tested as highly accurate (facts), and all these align well with a single general idea, it's like these ideas are converging on the single idea. The single idea represents a valuable summary, trend, or explanation. Brain actions such as deduction might produce convergences as well: if several axioms and proofs yield a single idea, then it's a valuable theorem. The ultimate result, after tallying an idea's convergences, is to situate it on a relative continuum. A hub idea involved in a multitude of convergences of various kinds is precious. But an isolated idea that's diverged from is suspect.
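The tally itself is simple to caricature. A minimal sketch (the scoring scheme, counts, and intensities are invented for illustration) of situating ideas on the continuum by their convergences:

```python
# Toy tally of convergences: each kind of convergence contributes its count
# times its intensity; divergences count against the idea.

def convergence_score(convergences):
    """convergences: list of (count, intensity) pairs, one pair per kind of
    convergence (verified endpoints, deductions, etc.)."""
    return sum(count * intensity for count, intensity in convergences)

# A hub idea: many accurate endpoint facts align with it, plus a deduction.
hub = convergence_score([(40, 0.9), (1, 1.0)])
# An isolated idea that other ideas diverge from: negative intensity.
isolated = convergence_score([(3, -0.5)])
print(hub > 0 > isolated)  # precious hub, suspect isolate
```

Nothing here measures sturdiness as an intrinsic quality; the score is nothing but a summary of the idea's relationships.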

Unfortunately, raw convergence isn't irrefutable. It carries its own inherent risk: it isn't necessarily universal. Its scope is possibly limited, even when it's quite dominant for ideas in its scope. Survey responses gathered in Connecticut could converge to an idea, but it might differ nonetheless from the idea that survey responses gathered in British Columbia converge to. Paying close attention to scope is just a price of replacing sturdy-thing ideas with convergent ideas.

But before fixating on the perceived inadequacies of ranking ideas by convergence, my advice is to methodically take an inventory of what, if anything, it actually gives up by comparison. An idea that has been converged to many times, in many ways, is an idea that is very likely to be converged on once more. So it's probably beneficial for planning around the outcomes of future actions. An idea that hasn't contradicted high-quality ideas is an idea that is very likely to not flatly contradict additional high-quality ideas. So it's probably beneficial as a lens for comprehending proposed ideas. An idea that has succinctly captured the pertinent similarities in a series of repetitive events is an idea that is very likely to echo the pertinent similarities in upcoming events in the series. So it's probably beneficial as a prediction or model of hypothetical events in its scope.

I'd say that the list of drawbacks is looking insignificant. For most purposes, a heavily convergent idea and a sturdy-thing idea are alike. The reason is that convergence is part of the original concept of a sturdy-thing idea in practice. If an idea itself were a sturdy thing, then ideas/actions/outcomes would be expected to converge on it. The difference is whether convergence is interpreted as secondary to the idea or interpreted as defining its actual extent. Particular actions can't distinguish between the two interpretations. Perhaps the situation is reminiscent of a (positive) bank account balance. The account's owner can take the action of withdrawing currency from the bank account no matter what the account "really" consists of. For withdrawals the bank account is like a stack of currency in a locked drawer—although the equivalency doesn't work at a failing bank.

The prospect of agreeing to humble ideas could spur the forgivable question, "If not ideas, then what is considered sturdy?" And the answer is lots and lots of real stuff. The milk in my refrigerator is a sturdy thing. My idea that the milk has soured isn't. This idea is linked to the endpoint idea that in the near future I open the milk container, hold it close to my nostrils, inhale deeply, and experience a sensation of odor. The idea of the sour milk is linked to more endpoint ideas such as requesting that someone else sniff so I can watch their reactions. Depending on the outcomes of these endpoints, the idea of the sour milk might be a convergent idea or not. The milk's reality is gratifyingly sturdy. It affects the amount of convergence which my ideas about it have. The same may be asserted about a more "existential" idea about the milk: is it still in the refrigerator, or did some obnoxious household member empty it without telling me? My ideas about the milk, presently occurring in my brain, don't dictate whether the milk is now elsewhere. If the milk is elsewhere, the actions I take won't end with me finding it; instead I'll end up thinking the ideas that go along with realizing the milk is gone.

If ideas regarding soured milk seem far too frivolous, Sean Carroll's writing contains a fitting candidate which is definitely not. His "Core Theory" is an immensely convergent collection of ideas. Moreover, as he painstakingly explains, its confirmed scope is immensely broad. People's typical lives are within it. In effect researchers and engineers are rechecking it repeatedly as they act. It's not a sturdy thing...but nonetheless we're metaphorically leaning on it all the time.

Saturday, January 28, 2017

swapping pictures

It's a bottomless source of ironic amusement that intellectual justifications for religious beliefs usually appeal most to the beliefs' current adherents. They relish hearing their cherished ideas defended and reconfirmed by multitalented communicators. And...I suppose I do too. That's probably why I took the time to read Sean Carroll's The Big Picture: On the Origins of Life, Meaning, and the Universe Itself and enjoyed most of it. I wish it all the popular or critical success that it can get.

With that in mind, it's no surprise that the book's major points weren't that groundbreaking or earth-shattering to me. His Big Picture is in harmony with mine. Nevertheless I appreciated Carroll's articulate and organized delivery as well as the specifics he laid out. I already knew about some of these supporting details and arguments—it's not my first exposure to these topics—but I learned some too. Obviously he can apply more physics knowledge to the Big Picture than I can, similar to how a neuroscientist can apply more neuroscience knowledge, or a philosopher can apply more philosophy.

As I see it, his model of "poetic naturalism" is consistent with my (poorly-named) "Pragmatism-ish". He proposes that there are diverse ways of conceptualizing reality. This diversity should be accepted but only on the overriding condition that each one can be mapped onto another without contradiction. Throughout these ways, ideas should be gauged with likelihoods that adjust appropriately as more samples of reality are taken. Likelihoods that aren't inherently 0 (logical impossibility) or 100% (logical certitude) shouldn't reach these extremes, yet likelihoods can and do reach values that are close to either pole.
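The point about likelihoods approaching but never reaching the poles has a standard illustration in Bayesian updating. Here's a hedged sketch (the hypothesis, sample odds, and step count are all invented for illustration) showing a probability driven arbitrarily close to 1 by repeated samples without ever arriving there:

```python
# One Bayes step: revise P(H) given a sample that's more likely if H holds.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | sample) from P(H) and the sample's two likelihoods."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

p = 0.5  # start undecided
for _ in range(10):  # ten samples, each favoring the hypothesis 4:1
    p = update(p, 0.8, 0.2)
print(p)  # very close to 1, yet strictly below it
```

As long as both likelihoods stay nonzero, no finite run of samples can push the probability to 0 or 100%, which matches the stated condition that only logical impossibility or certitude occupies the extremes.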

Likewise in Pragmatism-ish, I've proposed that ideas, actions, and reality are in a triangle of relationships. Each of the three shapes/restricts/informs the other two. (I'm using the word "idea" in the most inclusive sense, so it might be a perception, concept, statement, hypothesis...) People perform the mental action of determining that if an idea A is likelier than not then idea B is likelier than not, if idea B is likelier than not then idea C is likelier than not, etc. These connections form a web (or network) of ideas. Eventually the web extends to ideas which may be termed "endpoints": ideas that should be expected to, with a particular frequency, match particular outcomes of particular actions really performed.

Once people verify these matches, or fail to, they can take the further action of judging what some of the ideas in the web mean as well as the ideas' accuracy. To probe a supposedly meaningful and accurate idea is to question how it's ultimately grounded. What is it connected to within the web, and what relevant endpoints have passed or failed fair tests? I've sometimes referred to measuring an idea's meaning by its "verified implications". A folksy version, which is so short that it's vulnerable to numerous willful misunderstandings, is that truth needs to work for a living.

Like Carroll, I would say that ideas containing an element of subjective experience can be valid as long as those ideas are kept in their correct place in the web alongside other more objective ideas. Then the limitation of subjective experiences is always evident: the experiences are events that happened in one subject's body (including their brain). The idea is still an expression of something real if it's understood to occur as a movement of the matter in that body.

Complexity is inevitable when the sifting process is conducted with the proper care and labor. There are many possible cases. Ideas with "tighter" connections to verified endpoint ideas merit more confidence than ideas with looser connections. On the flipside, ideas with no connections merit little. These freestanding ideas, which may be "freestanding" from the rest of the web because they blatantly clash with well-grounded ideas, are like when Carroll's statements in one domain don't map onto other domains. Another case is that an idea has meager meaningfulness because it connects equally well with opposite outcomes, so that in effect it's asserting nothing of consequence. Then there's the case of multiple different ideas connecting to the same endpoints, so that there's some rationale for claiming that the ideas share a meaning. Skilled translators watch for this kind of subtlety.

The abstraction of a web of ideas has a resemblance to Carroll's vivid "planets of belief". I admire his metaphor. It should be spread. The suggestion is that, within an intellectually honest curious person, compatible beliefs gravitate together to form a planet. Combining incompatible beliefs results in a tension-filled unstable planet. Outside influences can affect the planet's stability causing chain reactions and tectonic rearrangements. Under some circumstances, it can be prodded into breaking apart and reforming into a novel planet.

Some could object that Carroll ventured too far outside of his designated area, and he should've left topics beyond physics alone. My response would be that his readers should know better than to assume that he or anyone is able to cover the Big Picture comprehensively in so few pages. By necessity it's an overview or a taxonomy. Readers with greater interest will be doing their own follow-up reading anyway.