Showing posts with label Book Responses. Show all posts

Sunday, July 19, 2015

sit a spell in the Silver Chair

And this time it didn't come into her head that she [Jill] was being enchanted, for now the magic was in its full strength; and of course, the more enchanted you get, the more certain you feel that you are not enchanted at all.
My impression is that The Silver Chair is typically placed in the lower tier of the Chronicles of Narnia series. Part of the reason may be the unsettling villainess, namely the witch in green. Her alluring appearance and mostly cordial composure are only masks. Like her realm of Underland, greedy malevolence lies under the mild surface. She's a patient schemer whose impulse is to work in secret.

Truth be told, she plainly doesn't need clumsy, aggressive threats of force to achieve domination. Her well-suited style of bewitching magic is psychologically manipulative and overpowering. Why wastefully assault her enemies' bodies when she can either mislead them or infiltrate their souls, thereby seducing them into defeating themselves? To seize Narnia's Prince Rilian, she doesn't overwhelm him with a contingent of fighters. She gradually fascinates him. She captivates him to make him a captive.

Similarly, his ensuing entrapment in Underland doesn't involve violence or intimidation. His magically mediated self-betrayal progresses to a worse stage through regularly scheduled sessions spent bound in the witch's potent Silver Chair. In essence he's no longer himself. He's shifted into a complete second persona. His former memories, motivations, and disposition are displaced. Throughout the day, Rilian's mentality is confined so masterfully that he's mostly oblivious to the difference. The author reiterates this obliviousness a few times to ensure that readers note this characteristic of the witch's powers. It may be an allusion to the author's recurring theme of moral desensitization: frequently committing wrongdoing, or just fantasizing about it, can cloud perception of its wrongness.

So much for the author's intentions. In fact, Rilian's second life under the sway of the witch and Chair is a striking multifaceted illustration of living under the sway of religious inculcation.
  • He's gushingly committed and grateful to the witch, i.e., the designated authority over him. He trusts the authority wholeheartedly. He believes earnestly in the statements made by the authority, even though he can't explain exactly how the authority got that knowledge. The authority has extraordinary abilities ("magic arts") that he can't possibly duplicate or evaluate for himself, so he doesn't feel able to question.
  • He has been handed a tidy script of expectations to fulfill. His destiny is to capture Narnia for Underland. His free will—such as it is—revolves around his willingness to conform, not the individual freedom to chart and assess his own course. He's been told what role he will play and how. Equally clear is his present and future hierarchical position; commands will flow from the witch to him and from him to his inferiors. On some level his freedom persists, but external forces have tampered with it.
  • The witch takes him on short trips to preserve his acclimation to the surface, e.g. the intensity of sunlight. Because he's a prisoner, his trips entail severe conditions. He's forbidden from displaying his face or speaking. These preventive countermeasures embody an attitude of minimizing and filtering unavoidable contact with the frightening outside world. This same attitude is demanded of religious followers. They're cautioned against weakening their faith by paying attention to unvetted sources of information or by studying alternative viewpoints. It's worth noting that this worry isn't far-fetched. I've previously mentioned that my uninteresting background didn't feature ruthless cultural isolation, and indeed my steady absorption of contradictory information was key to ultimately discarding my parents' faith-beliefs.
  • Like the rigidity of followers' attitudes toward outside ideas, the rigidity of their habitual rituals is shrewd. These rituals repeatedly reinvigorate their faith-beliefs, just as Rilian's periods in the Silver Chair reinvigorate the twisted roots of the thought patterns imposed on him. These prescribed doses of "spiritual relief" are indispensable for reinforcing the desirability of their specific concepts. The opposing pull is unrelenting, because observable violations of faith-beliefs inevitably accumulate. It isn't rare to hear devoted attendees comment that their weekly activities renew their faith or to hear them warn that erratic attendance would endanger their faith.
  • That said, the comparison isn't perfect. Most obviously, Rilian must be tied to the Chair. In each sitting, his preexisting self and his central memories temporarily surface. Some of the book's most moving lines of dialogue are his desperate pleas to be released before he loses himself again. Followers of faith-beliefs, some more than others, have comparable episodes of clarity and candor. They may not be nearly as horrified as Rilian about how they've spent their time nor nearly as eager to drop their comforting beliefs. But perhaps they're haunted by the meddlesome pair of questions, "What if my faith-beliefs have been inaccurate all along?" and "Precisely what indicates that my faith-beliefs probably aren't inaccurate?" Those are the occasions when they're more willing to pay sincere attention to the arguments against them, and they're temporarily less inclined to brush away the holes in their own arguments. Debates don't need to convince them immediately; hearing the faults expressed is preparation for a hypothetical future hour, in which they'll abruptly stand up, look around, and deliberate about the soundness of their thinking without the Silver Chair's interference.
  • Lastly, he's courteous and quick to laugh and smile. The problem is that a short conversation with him is enough to reveal that he's selfish, patronizing, boring, flippant, and stubborn about his opinions. He deflects. He's positive, but at the cost of refusing to ponder anything that might counter his perspective and assumptions. Unfortunately, this demeanor is reminiscent of some irksome followers of faith-beliefs. They're detached and consumed by their image of a happy and proper paradise. Their inoffensiveness is mixed with hastiness to devalue anything or anyone whom they consider below their concern and their strict, inflexible standards.
Besides the metaphor of Rilian's spellbound lifestyle, two more topics are obligatory during discussion of this book. The first is Puddleglum's speech to the witch, who moments ago had nearly succeeded at mystifying the whole group of heroes about the existence of anything above ground. In tightly condensed form: "Suppose we have only dreamed, or made up, all those things [...] the made-up things seem a good deal more important than the real ones. [...] I'm going to stand by the play-world. I'm on Aslan's side even if there isn't any Aslan to lead it. I'm going to live as like a Narnian as I can even if there isn't any Narnia. [...] we're leaving your court at once and setting out in the dark to spend our lives looking for the Overland. Not that our lives will be very long, I should think; but that's a small loss if the world's as dull a place as you say."

Sometimes his speech is portrayed as a stirring counterpoint to all kinds of atheism. In 2005 I might have agreed. Now, I can't stop noticing the shaky premises it rests on.

  • If it's interpreted to mean that goals and ideals depend on faith-beliefs, then I've already responded to that. Of course faith-beliefs aren't necessary to envision improvements to realities. Moreover, the odds of attainable progress increase when goals and ideals aren't kept separate from accurate realities. Certainly one can emulate Narnian behavior without believing in Narnians by faith. 
  • If Pud's speech is treated like the claim that one's realities are in the realm of personal preference, then I don't accept that either. Realities routinely violate personal preference, to put it mildly. Humans' preferences don't adjust realities telekinetically—no, psi energy doesn't appear in the equations of quantum mechanics. But humans are manifestations of matter who can participate in causing changes to other matter to better fit their preferences. (This doesn't deny the instrumental effect of their chosen mode of reaction on how current realities disturb their thoughts.) 
  • If P-glum's words are understood as a catalog of serious, essential downsides of not relying on faith-beliefs, I would disagree. Generally those downsides are uninvestigated prejudices. Reaching a negative conclusion about a faith-belief doesn't imply that one is negative about everything all the time. Examining rather than surmising a notion's likelihood doesn't imply that one lacks sufficient imagination for the notion.
  • If one follows the symbolic lead of Puddy-g by pronouncing the verifiable world "dull", many who share my stance would recommend additional closer, curious, nonjudgmental peeks. Then the world might not seem dull enough anymore to justify futile attempts to intertwine it with a world of faith-beliefs.

Moving along, the second obligatory book topic is Aslan's hurried "signs" of guidance to the heroes: curt instructions for them to carry out the mission while the lion is, er, somewhere else doing something else. Maybe his absence from the bulk of the story is a factor behind its lesser popularity in the series. An immensely capable and compassionate entity leaves behind Delphic (oops, wrong mythology) sayings. The recipients are assigned to conscientiously obey all the sayings. But since Aslan isn't more communicative either that time or later on, they themselves bear total responsibility for refining and applying the sayings' meanings. They anxiously debate with each other to sort out various missing details that would've been extremely helpful to have known for sure in advance.

Golly, I can't imagine why followers of faith-beliefs aren't more enthused and flattered by such an analogy...

Sunday, August 03, 2014

uncertainty is a participation ribbon

Without a doubt, I knew beforehand that I wouldn't agree with every point in Frank Schaeffer's mishmash, Why I am an Atheist Who Believes in God: How to give love, create beauty and find peace. And my expectations were met. My reactions are as inconsistent as the ideas expressed. Like Schaeffer, I dismissed my faith-beliefs without considering them totally worthless. We're in agreement that the Bible is packed with factual inaccuracies and antiquated moralities. We accept the statements of scientific consensus. We reject the claim that contemporary society should regress to the cultural mores advocated in the Bible. We may have similar political views, but his blogging is obviously much more politically focused.

Thereafter the philosophical, or perhaps psychological, differences start to pile up. I dismissed my faith-beliefs' activities some time after dismissing the faith-beliefs themselves. Such activities would now clash with my innermost thoughts. I don't have the same old need or desire to continue them. In fact, almost any other category of activity seems more valuable and enjoyable to me. But Schaeffer matter-of-factly confesses that he prays and attends religious services, due to both ingrained compulsion and ongoing appreciation for the experiences' flavor and good intentions. In one section he lightheartedly compares them to bowling regularly.

That's fine with me. He can spend his personal time in whatever frivolous ways he likes, assuming of course he isn't harming anyone else. Likewise, one's chosen identity isn't thrown into actual contradiction by singing Christmas carols, or LARPing, or reenacting Civil War battles, or reciting the dialogue of Puck. The problems only start when someone fails to isolate these fanciful roles within a sharply delimited context...

I might even be glad that he routinely performs religious activities, if the simple effect is encouraging kindness and the contemplation of life from a greater perspective. His book more or less portrays "Christ" not as a man or god but as a kind of storied avatar of concepts such as broad inclusiveness, equality, rejection of biblical literalism, compassion, and anything else Schaeffer approves of. Hence he suggests that Scandinavian countries merit the label of "Christian", and the Enlightenment qualifies as an implicit "heresy of Christianity".

I suppose that I can see his point. However, the semantic gymnastics strike me as fruitless. Sure, someone certainly could "take back" the myth of Christ from traditional churches, and refashion it in order to link it to new things. But what does that gain? Who cares about ensuring that link? Why not allow an upstart to be good without "christening" it, so to speak? Must this be another case of "meet the new boss, same as the old boss"?

Still, the gap between our differing approaches to religious activities is less extensive than the chasm between our differing emphases on uncertainty—or "mystery" if the speaker wishes to sound wise and impressive. My inclination is to compare uncertainty to a participation ribbon. When I was a young child, participation ribbons were part of Field Day: an annual school event held outdoors. Field Day included quick individual competitions in which the top three received a designated (cheap) ribbon. Nevertheless, everyone in the class who was present received at least one ribbon for their participation in Field Day. Participation itself was an achievement.

In a similar way, acknowledging the uncertainty of one's current knowledge is the achievement of successfully showing up for the honest struggle to obtain accurate ideas. Recognizing possible uncertainty is akin to the steps taken before the first step of Field Day's dash competition (a race so short that it was almost absurd). It signals the participant's willingness to seriously judge the boundaries of their knowledge.

The opposite isn't confidence but thin-skinned arrogance: "My knowledge is so infallible that absolutely no pragmatic action needs to be taken, whether to 'verify' its implications or to seek out superior alternatives to it." Someone with exactly zero uncertainty is someone who cannot imagine improving their knowledge, so they don't participate meaningfully in the struggle to obtain accurate ideas. They're not lining up at the starting line for the dash. Rather, they sit on the side and brag that they would circle the school building five times if they ever deigned to test their speed in the dash. This is the state of mind which knows the answer with certainty before expending any mundane effort. It's generally called "fundamentalist" by the irreligious, although it surely isn't confined to self-identified Fundamentalists.

The comparison underlines several aspects. First, like a participation ribbon, uncertainty isn't a pursued prize. It's not an aim. It's utterly normal and unremarkable. It's more like a periodically performed measurement that fluctuates according to specific justifications. Uncertainty is why statistical analysis matters and why verifications should be repeatable; otherwise, one or two checks could be flukes. It's why someone concedes that their knowledge is possibly revisable. It's why a credible experimenter attempts to discern and publicly disclose the weaknesses in their own experimental studies. Once someone pinpoints their sources of uncertainty, they can speculate about circumstances that could reduce uncertainty and force revisions to knowledge. Nobody needs to be proud of being uncertain. Nobody needs to speak as if the existence of uncertainty produces definite conclusions in response. While it's an essential prerequisite to placing knowledge in realistic context, uncertainty isn't precious by itself.
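The fluke problem can be made concrete with a toy simulation (my illustration, nothing from the post itself): estimate a fair coin's bias from a handful of tosses versus a few hundred, and count how often each sample size misleads badly.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def fraction_heads(flips):
    """Estimate P(heads) of a fair coin from `flips` tosses."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

def wildly_off(estimates, truth=0.5, tolerance=0.2):
    """Count estimates that miss the true value by more than `tolerance`."""
    return sum(abs(e - truth) > tolerance for e in estimates)

# One or two checks can be flukes: 4-toss estimates often miss badly,
# while 400-toss estimates almost never do.
small = [fraction_heads(4) for _ in range(1000)]
large = [fraction_heads(400) for _ in range(1000)]
print(wildly_off(small), "of 1000 tiny samples were wildly off")
print(wildly_off(large), "of 1000 large samples were wildly off")
```

Repetition doesn't abolish uncertainty; it shrinks it to a measurable, reportable size, which is exactly the sense in which uncertainty is a periodic measurement rather than a verdict.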

Second, like a participation ribbon, uncertainty isn't an endpoint. It's not a destination. It's not the finish line of the dash. It's not a signal that someone should immediately give up expanding their knowledge. It's a clue to what someone should do next. They can't eliminate it all at once but they can gnaw at it bit by bit. On the other hand, one of the hallmarks of realistic answers is the tendency to lead to all-new sets of questions. The work to reduce uncertainty might result in further uncertainties which are different and surprising. That's still progress. Now the searcher has better reasons to be more sure about the prior idea. Novel uncertainty doesn't cause frustration at not capturing "the final truth". It's an invigorating invitation to keep moving.

Third, like a participation ribbon, uncertainty doesn't demolish the notion of winners and losers. Everyone's participation in the dash doesn't imply that they will complete it simultaneously. As I keep reiterating, uncertainty isn't absolute. It's not a poison. The smallest speck of it doesn't ruin trustworthiness or erase past advances. Its proper use isn't to shut down debate. It doesn't grant equal legitimacy to every half-baked conjecture. It's not a rationale for saying, "I'm uncertain and you're uncertain, so we're both total fools who shouldn't ask each other how we defend our positions."

To the contrary, uncertainty is yet another distinguishing mark. It's directly tied to how the position was verified. If one participant's beliefs seemingly derive from their moods, then their uncertainty springs from the wild oscillations between those moods. That variety of uncertainty is hardly equivalent to the ever-popular variety of mathematically precise and limited uncertainty within quantum mechanics, for instance. Wave-particle duality and Planck's constant don't somehow support the dangerous proposition that all human beliefs have been proven identically useless. Nor do they support the bizarre fantasy that human souls can remake realities by intentionally collapsing wave functions into a desired quantity.
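For contrast, quantum uncertainty really is precise and limited. A back-of-envelope sketch (my numbers, purely illustrative): the Heisenberg relation sets a hard numeric floor, delta_x * delta_p >= hbar / 2, rather than licensing any belief whatsoever.

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_momentum_spread(delta_x):
    """Smallest momentum uncertainty (kg*m/s) allowed for position spread delta_x (m)."""
    return HBAR / (2 * delta_x)

# An electron confined to roughly atomic scale (0.1 nanometer):
dp = min_momentum_spread(1e-10)
print(f"minimum momentum spread: {dp:.3e} kg*m/s")  # a tiny, exactly bounded quantity
```

The bound quantifies uncertainty down to a specific number; it says nothing about moods, souls, or wave functions obeying wishes.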

Therefore, uncertainty of a belief isn't tied to the particular way that someone personally encountered the belief. Uncertainty is gauged by the belief's underlying chain or web of positive verifications. For example, I readily declare that I was never personally taught to believe in a Cosmic Turtle. Regardless, I don't dismiss it for the sole reason that I was never personally taught it. I dismiss it because I'm not convinced by a chain or web of positive verifications underlying it. My disbelief isn't wholly dependent on the "narratives" of my upbringing or anyone else's. Am I "uncertain" about the Cosmic Turtle to the extent that I cannot say that its absence of detection thus far forbids its (hidden) existence? Well, yes. And its current status is not too dissimilar from undetectable contemporary "mystery" versions of gods. In short, if folks like Schaeffer claim that I qualify as a "fundamentalist atheist" because my deep uncertainty about Great Theological Off-stage Mysteries leads me to dismiss them, then by their standard they qualify as "fundamentalist Cosmic Turtle deniers". Nobody should care whether someone was personally acclimated to this or that set of ideas. In any case, the more relevant question is how one's ideas are distinctively supported, not how they heard about them. Familiarity or unfamiliarity is not enough to either verify or falsify any specific belief.

Ultimately, disagreements about uncertainty aside, I don't have serious objections to much of the book. I can imagine far worse fates than vast populations acting like "atheists who believe in a god"...a god that does nothing more than embody carefully selected ethical ideals.

Monday, December 31, 2012

Wither and Frost

In the category of fiction by C. S. Lewis, the idiosyncratic That Hideous Strength doesn't place highly on most lists. Yet this wordy novel for adults encapsulates the author's recurring interests and opinions, and it expresses those ideas in a more engrossing way than his nonfiction. (I admit that his nonfiction became much, much less compelling after I dismissed my faith-beliefs.) This story contains a startling juxtaposition of collegiate/organizational politics, science fiction, medieval fantasy, classical mythology, study of language, and of course Christianity.

It contains two opposing sides engaged in an unequivocal struggle between Good and Evil. The Good side is aligned with benevolent celestial spirits, and its culture and morality are traditional (more or less...). The Evil side is aligned with malignant terrestrial spirits, and its culture and morality are a parade of horrors intertwined with amoral science and unhinged progressiveness. By the way, neither side is committed to a convincing ethic of human equality or democracy; it seems both Good and Evil demand that their underlings know and follow their preassigned roles. I should point out that the start is slow and semi-realistic, yet the characters and the events grow increasingly bizarre as the plot proceeds. The outrageous climax is a quite literal deus ex machina. In one scene, a character writes propaganda articles for newspapers. In another scene, the omniscient narrator peeks briefly into the perspective of a bear.

The Evil side takes the form of an impersonal juggernaut of unrestrained power called the National Institute of Coordinated Experiments. At its deepest level, it's steered by two remarkable antagonists whose surnames are Wither and Frost. Naturally, they're two of the most memorable characters. These portraits of villainy are prime starting-points for dissecting some of the favorite themes of Lewis. As much as possible, I'll try to digest his mere (ha!) theology into standalone nuggets of insight.


Wither

Wither is the Deputy Director, the top day-to-day authority of the organization (and one of his unwritten duties is to neutralize the clueless official chief!). However, he wields his power in an unexpected manner: his leadership and communication style is astonishingly vague. He's not openly tyrannical in the slightest. He explicitly instructs his inferiors to act with "elasticity", i.e. serve how they can without conforming to limited job descriptions. He's easily irritated if anyone coerces him to act or speak bluntly. He meanders with his voice, extending his conversations with excessive courtliness until the other participant is worn down. He also meanders on foot, walking around the organization grounds with no warning of his approach and no planned destination. He's usually friendly and polite to someone's face. Nevertheless, he expects all of his euphemistic "requests" to be obeyed without hesitation in order to prevent him from demonstrating his "hurt" at being ignored.

Wither is an example of someone whose public face is so well-developed that his personality is virtually split into pieces. His habitual mimicry of courtesy is so complete that he feels no need to direct his entire attention to his job. His mask is himself. The disconnected part of him that runs the machine-like organization is itself machine-like. Therefore Wither exhibits two related problems of human nature that Lewis highlighted multiple times.

1) Wither's interactions with others are insincere. For Lewis, insincerity wasn't a harmless social game. It was an insidious path of temptation to the greater problem of self-delusion. Through self-delusion, humans avoid acknowledging their actual motives and thereby also avoid acknowledging their camouflaged innermost "evils". They convince themselves that they're virtuous when even their virtue is underpinned by their beloved flaws. Furthermore, insincerity in society leads to shallow relationships built on passive-aggressive pretenses. Lewis recognized that pride and hatred have as many subtle expressions as love. Regardless of whether someone is religious, these are valuable observations and warnings about human deceptiveness. Sincerity and honesty in communities, including the "community" within a single brain, are values that are tied to the earnest pursuit of truth—a universal humanistic value.

2) On the other hand, perhaps Wither's outward insincerity is a surprisingly accurate reflection of a worse root problem, namely self-disintegration. Perhaps Wither no longer maintains a coherent and unified self-concept, so his thoughts and actions are a mass of contradictions. In that case his blathering managerial persona isn't lying when it gives the false impression that Wither is compassionate; while it's active that persona is being truthful about its own sentiments. Lewis identified disorder as a major characteristic of the natural human state. He emphasized the inborn tendency to drift away from an established idea. Without sustained training and effort, humans lose control over their spontaneous competing impulses. They're prodded to rebel against their ideals, no matter what those ideals happen to be. According to Lewis, the long-term inability or unwillingness to assert self-control eventually produces a pitiful result, when the original tendency toward disorder culminates in full-blown self-disintegration. At that point of no return, the human has ceased to be a unified decision-maker in any meaningful sense. There isn't a brain "executive" that issues overriding directives, or if there is then the executive doesn't retain power of command.

Lewis' intent behind this theme was to claim, "If you refuse to accept the rule of Christianity then you cannot rule yourself successfully." Apart from this abominable non sequitur, his cautionary notion of self-disintegration is valid enough. The human brain is packed with multitudes of parallel, energy-expending neural networks that show subconscious activity. And habits have physical form in these networks. So it's not far-fetched to notice that the subjective experience of consciousness is torn by inner conflicts which demand considerable mediation—"cognitive dissonance" is the preferred label for it. Moreover, it's also not far-fetched to notice that frequency of activation affects the talkativeness of particular networks. Humans who don't "exercise" advanced brain functions, such as imagining future repercussions, certainly can't expect those functions to "win out" during decisions. Self-disintegration is a creeping danger to anyone who wants to live in consistent accordance with their chosen ideals, independent of how they choose to derive those ideals. Pragmatically speaking, cheap ideals without the verification of steady commitment barely deserve to be called "real".


Frost

Frost is more secretive. He mostly works in private, but he gains greater and greater prominence as the story unveils the sinister mainspring of the Institute. He's disciplined, direct, abrupt, and severe. Whereas Wither appears to be welcoming and harmless, Frost appears to be cold and menacing. When the two of them converse, he complains about Wither's roundabout speaking, emotional word-choice, and reliance on patient strategies. He's keen on stoic allegiance as opposed to camaraderie. Wither's face sometimes looks lifeless, while Frost's stony face (his eyes concealed by the light falling on his pince-nez glasses) sometimes looks empty of every shred of humanity. And his smile makes the effect worse.

Frost personifies an attempt by Lewis to create a reductio ad absurdum of one of his perennial grievances, "Subjectivism". Subjectivism, as described by Lewis, is the general belief that the qualities or values of objects arise from the observers but not the objects. That is, objects don't embody qualities. For example, when a diligent florist says, "My flower is beautiful," Lewis asserts that Subjectivism interprets the statement as a pure fact about the florist and nothing substantial about the flower. He argues for the opposite idea of object qualities which definitely exist, whether or not human judges agree. His frank concern is that if humans begin to think that these qualities are solely subjective, then they'll doubt the "real" existence of those qualities...with the dreaded final outcome of humans such as Frost who devalue those qualities altogether.

Subjectivism is Frost's philosophy and lifestyle, but it's implied that Frost was taught by a diabolical source. He in turn wishes to initiate others. Thus he gives several lectures on his beliefs. He insists that all emotions are "nothing more" than chemical/biological phenomena, so he encourages intentional rejection of the entire set of illusions. He preaches that feelings and the associated moral judgments are unnecessary impairments to clearheaded analysis and swift action. His ends always justify his means. In effect he's a Sociopath With A Cause, or an ideal pawn for his "dark masters". Part of his unforgettable training method is to systematically provoke revulsion in order to guide his initiates to ignore their instinctual biases.

Nowadays the entire topic puzzles me a little. Dramatizations aside, I don't feel threatened by Subjectivism. My response to it is analogous to my response to the charge of relativism. Subjectivism is a simpleminded caricature. Yes, I'm a heretic who believes that humane ideals are constructed by humans. At the same time, I reject the hasty conclusion that human-constructed ideals are defined without exception by the petty competitive interests of individuals or tribes (or voting blocs?). Given the human capability to develop and apply other sophisticated ideas, ideals could and do achieve the same sophistication. Ideals aren't constrained by originating and then existing within subjects.

But an objector may sputter, "You're missing the main point. Isn't this a blueprint for disaster? If things aren't inherently good or bad or pretty or ugly or prudent or foolish, then subjects could disagree!"

The pragmatist replies, "Yup." It's a pragmatic truth that subjects disagree often about countless objects. Perhaps they agree about the attractiveness of the proud florist's flower. Perhaps not. In either case, the subjectivity of the flower's attractiveness is inconsequential. Just as human subjects construct thoughts about objects, they select which subjective differences matter to them. (I suspect their reactions to the flower matter more to the florist.)

In more important cases like some behavioral ideals, humans feel that every subject should agree. They can accomplish that goal through the many methods that suffice for any sophisticated ideas. To start, they could explain, persuade, and debate. In cases of still-greater importance like bans of despicable acts, humans feel that every subject must agree. Hence they resort to enforcement and various deterrents.

Fortunately, a second pragmatic truth about real-world subjectivity intervenes. Since all humans are linked by common descent and the members of each cultural group are linked by common training during immaturity, most subjects agree about an interrelated collection of vital basics. Human subjectivity has a shared frame of reference, especially in the context of a homogeneous culture. This second truth is a clue for why the objectors to Subjectivism too readily assume that all of the subjective occurrences in their brains are "really" attributes of the objects. They're heavily adapted to their own frame of reference. (If they ever bother to refer to it they may use the unhelpful term "common sense".) Their ingrained subjectivity isn't distinguished from genuine objectivity.

I concede that, to his credit, Lewis sometimes alludes to the significant disparities among individuals and distinct cultures, although he repeatedly emphasizes every similarity he can find. Unlike me, he doesn't interpret these similarities as supporting evidence for a species-wide, deep-rooted evolutionary explanation. Indeed he jumps to the opposite explanation of a singular divine Moral Law imposed on every human soul; he opines that the discrepancies spring out of a universal urge to discard or replace portions of the Law.

Presumably, the threat of those possible rewrites of ideals is why Lewis thought that Subjectivism should be frightening. To the contrary, I now see that the refinement of ideals is a strength. Subjectivism is scary whenever subjects cannot be trusted, but humans have stumbled on effective pragmatic solutions to the problem of trustworthiness. The refinement of ideals can be treated carefully, i.e. democratically and peacefully and thoughtfully. In any case it's better than having ideals that never improve because of faux objectivity: "That's just the way it is."

Saturday, March 31, 2012

explanations à la David Deutsch

The Beginning of Infinity by David Deutsch contains some intriguing philosophy. One of its central propositions is that good explanations are essential to human knowledge; in fact, Deutsch redefines "knowledge" and many other words accordingly. He musters arguments to tear down competitors with relish: empiricism, positivism, inductivism, instrumentalism, postmodernism, justificationism, anything supernatural. Instead he aligns with general fallibilism and the outlook of Karl Popper.

My interpretation of pragmatism takes a similarly dim view of those competitors, but by a more roundabout route. As I see it, a datum is never alone and objective meaning lies in isomorphism. In short, the importance of context shouldn't be underestimated. Information becomes meaningful for a human because the brain is the defining data structure. It verifies, validates, cross-checks, computes, tests, trusts. Information that passes through these filters of diverse physical and/or mental processes can be said to "work"1. The brain is the singular agent throughout that initiates, performs, and finally evaluates. Of course, in practice humans augment the brain all the time. Total self-reliance is too constricting to seek "truth that works, every truth that works, and nothing except truth that works"2. It's fine that theorists and philosophers and mathematicians can sit in quiet rooms and ponder, but that approach has its limits! Calculating the circumference of Earth doesn't reveal what's on the opposite side...

Deutsch defines the quality of a good explanation: "hard to vary". My version is not as pithy: "good explanatory information permits many isomorphisms to other good information". This appears to be circular because it really is circular. The circularity is largely the point. It implies that there aren't any Ultimate Bits of information with inherent and infallible meanings3. Yet a suitably advanced and knowledgeable information processor can certainly derive meaning from an aggregate of pieces. For information, to be meaningful is to be codependent.

For example, it's a good explanation that matter is composed of elemental atoms. This explanation is good because it's isomorphic to an impressively vast group of experimental results in chemistry and physics. If someone proposes an alternative explanatory material, e.g. mana, then it's fair to ask how isomorphic that explanation is to the same experiments, and if not then why not. Note that my definition of "isomorphic" isn't intended to be mysterious. In some cases it could just mean "closely matching reliable facts". Explanations with many, small, tight isomorphisms are more firmly locked into a greater context of solid information than explanations with few, grandiose, loose isomorphisms. In Deutsch's language, the former are hard to vary and the latter are easy to vary. A single chemical equation is isomorphic to substances and quantities that are measurable and unambiguous. A single mana explanation is not4. Additionally, a chemical equation has isomorphisms to atoms and electrons, i.e. complicated orbitals that correspond to the rows and columns of the periodic table. An explanation based on mana doesn't.

Other than preferring different wording of the definition of a good explanation, I sense that I'm not as keen as Deutsch to emphasize a starring role for Reason with a capital R, especially if it's construed as a synonym for mere logic. I'd rather claim that applied reason is one of the important mental tools that humans use to tame reality or at least interpret it. It works alongside nonlogical guidelines like the human delight in mental economy, which is the strong preference for a minimal quantity of concepts with wide if not universal reach, as opposed to a myriad quantity of conflicting concepts with sharply constrained applicability for each. (Deutsch has much to say about explanations with expressive reach, but I may be more eager to acknowledge that reach is also a human craving.)

Fortunately, parts of the book seem to agree with me. It dismantles the famous Zeno paradoxes, and the dismantling amounts to the general principle, "Reality isn't held hostage to purely logical/mathematical conclusions." And that principle is well-known to anyone who applies complicated science because models must be used with care and understanding. Whenever a function is fit to a discovered set of data points, the function might or might not accurately predict every other data point; outside of the data set, there could be "phase changes" in reality that don't fit the function well. That outcome isn't a violation of reason. It's a reminder that good explanations are bounded by a data context. In a battle with good data outside those bounds, obsolete explanations should lose.
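This point about data contexts is easy to make concrete with a toy sketch (my construction, not the book's): a least-squares line fit to samples from one "phase" of a process predicts perfectly inside its data context, then extrapolates confidently and wrongly past the phase change. All of the names and numbers below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of a line y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def reality(x):
    """A hypothetical process with a 'phase change' at x = 5."""
    return float(x) if x <= 5 else 5.0

# Fit only on data from the first phase (x = 0..5).
xs = [0, 1, 2, 3, 4, 5]
slope, intercept = fit_line(xs, [reality(x) for x in xs])

predicted = slope * 10 + intercept   # extrapolating past the data's bounds
actual = reality(10)
# Inside its context the fit is exact (slope 1, intercept 0); outside it,
# the model predicts 10.0 where reality delivers 5.0.
```

The fitted line isn't "violating reason"; it's just a good explanation being used past the bounds of the data that grounded it.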

But which data is "good", and obtained by which "measurements"? Deutsch's answer is unsurprising: explanations must underpin data-gathering, too. A measuring device is presumed to be reliable because it was built using good explanations. A handful of "outlying" readings can be disregarded as errors because good explanations of statistics show that those readings are truly anomalous. Data is simply unable to "speak" on its own. Explanations are necessary. Better and better explanations, which are sought iteratively and recursively, are the stepping stones to the titular "infinity" of knowledge.
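As a toy illustration of that last point (my own sketch, not the book's), a robust statistic can serve as the "explanation" that licenses discarding a glitched reading. The specific numbers and the cutoff are invented for illustration.

```python
from statistics import median

def trustworthy(readings, k=3.0):
    """Keep readings within k median-absolute-deviations of the median;
    the statistical model is the 'explanation' that flags the rest as errors."""
    m = median(readings)
    mad = median(abs(r - m) for r in readings)
    return [r for r in readings if abs(r - m) <= k * mad]

readings = [9.8, 9.9, 10.1, 10.0, 9.9, 42.0]   # one glitched reading
kept = trustworthy(readings)
# The 42.0 is rejected not because the data "spoke", but because a
# background explanation of measurement noise deemed it anomalous.
```

The median and MAD are used instead of the mean and standard deviation because a single large outlier in a small sample inflates the standard deviation enough to hide itself.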

That proposal is front and center in Deutsch's imagined Socrates dialogue, which holds up explanations as the fundamental units of understanding. And those fundamental units come with a fundamental method, criticism. Criticism is the evolutionary/progressive attempt to continually probe explanations for weaknesses, with the goal of then replacing the failures. Deutsch takes special care to contrast criticism with its opposite method, a catch-all called "justified truth". The latter is the misguided attempt to identify and defend truths that are surely true and cannot be false. Fallible explanations are confirmed or falsified, but are never off-limits to attacks by critics. Some of the more self-congratulatory sections of the book are detailed praises for the societies/cultures which have more tolerance for the clamor of diverse ideas, especially the iconoclastic ideas that challenge authority or convention.

While I find that line of thought convincing, once again I'd analyze and state it differently with clarifications. Pragmatic philosophizing asks what purpose is served by the intellectual construct called "justified truth". And the pragmatic answer is that its purpose is to distinguish which truths have the greatest certainty. In short, its intent is the same as the intent of criticism: separating the truth from "fooling oneself". The relationship isn't antagonistic at its core. It's playing "good cop/bad cop" in the interrogation of truth. Criticism eliminates the unworthy candidate truths, so justification can select from the possibly infinite number of remaining creative candidates. Far from tossing out "justification" altogether, the (infamous) pragmatic definition of justification/verification is simple reliability of the candidate truth in accomplishing corresponding endeavors. One pertinent endeavor has been previously mentioned: modeling reality in ways which are isomorphic to good information.

Perhaps that answer verges on evasive wordplay. My suggestion is to roll both "relentless criticism" and a shrunken version of the notion of "philosophical justification" into the big ball of conceptual mud called "verification". By doing so, am I questioning the superior value of dynamic evolutionary criticism over static repressive dogma5? That's not my desire. Au contraire, to be a conscientious pragmatist is to be an active skeptic. I'll repeat my earlier axiom: individual bits of information cannot be meaningful or accurate without isomorphic links to other bits. In response to this axiom, a human must probe the links of an assertion to establish its meaning and accuracy. They may judge that the links are dubious or logically incompatible; that is in the category of criticism. They may introduce creative new links, or devise ingenious experiments, to increase the estimation of confidence; that is in the category of justification. Hence, no matter what category applies to the action under discussion, critical thinking and trial-and-error are in effect.

Nevertheless, I doubt that Deutsch would find this alternative scheme to be acceptable. It's much too reminiscent of what the book calls "instrumentalism", i.e. valuing a theory solely or primarily for useful predictions and more or less disregarding its explanatory claims about reality. It differs, though. Unlike instrumentalism it judges the reality of explanations as eagerly as all statements. But explanations must pass the same methodological rules. The grand inquiry is always "How do we know?" If an explanation is real, then by what actions would we detect it? Pragmatic reality consists of the answers to these questions. "Theoretical" explanations with discernible verified effects can certainly be as real as "empirical" observations or measurements or numerical predictions. Arguably, a pragmatist could flip instrumentalism around and claim that specific theoretical explanations are more real than a single empirical item of interest, because the explanation had been verified countless times by countless experts in countless procedures but the single empirical item might come with reasonable doubts (oops, gunk on the lens, or oops, I forgot to carry the 1).

However, to someone who insists on the immutable reality of explanations, such a cure may seem worse than the disease. What's the benefit in calling explanations "real" if the cost is redefining, weakening, or stretching what we mean by "real"? After all, when a human uses an explanation to comprehend a phenomenon, their earnest motivation is to uncover the real stuff responsible for it6. They wouldn't frame it as inventing or imagining ideas for some subjective purpose. They see the explanation as having an existence independent of foolish capricious humans, including nonsense-talking pragmatists7.

And round and round it goes, for the pragmatist then responds by noting that a real existence independent of all physical or mental human activity is worthless, and, even worse, unsubstantiated. Moreover, to pair this insistence with a strong emphasis on fallibilism leads to the troubling consequence that explanations are independently real yet changing constantly, because explanations evolve, as Deutsch cogently recommends. So were the past and disproven explanations real? If so, then the human accumulation of the evidence, which disproved the past explanations, must have somehow shifted reality itself. If one starts to admit that the reality of explanations can vary by degree depending on verification, and that humans' current complete picture of reality depends jointly on the explanations that they actively construct and the information that they collect over time, then one may not be far off from a pragmatist.

Given his unmistakable powers of reasoning and debate, I'm sure that Deutsch could trample these thoughts at his leisure. Still, his long and deep book was thought-provoking reading, for which I'm grateful. Although I'm incredulous about whether the human progress of explanations is as all-powerful as hypothesized, his confidence is undeniably touching.


1 "Work" is intentionally left vague in order to cover the wide scope of all knowledge. Contrary to the common misrepresentation, it isn't synonymous with "profitability" or "convenience". False information could possibly "work" for some selfish purpose but not "work" for understanding reality; for instance, paranoia can be excellent at keeping you safe regardless of how incorrect it is at pinpointing the foremost motives of everyone around you.
2 Someone who had that strategy might say, "I reject your reality and substitute my own." Or less humorously, "Reality moves in step with the dictates of my beliefs. I can trust my deductions because my small simple-minded set of fundamental axioms is complete, flawless, and impervious to all criticism."
3 I have enough self-awareness to recognize that I therefore cannot claim that my own philosophical ideas, including the words in this very blog entry, are self-evidently true. All I can attempt is to convince readers of the "reasonableness" of the ideas, where "reasonableness" is a pragmatic judgment of whether the ideas "work", naturally.
4 I don't wish to seem biased. I suppose that an imaginary mana explanation could have similar isomorphisms. It could describe kinds of mana, which are capable of recombination, and it could allow proportions of mana to change in formulaic ways. Thus the mana believer could say that normal water is "two-parts ephemeral mana to one part airy mana". If explanation A has many isomorphisms to C and explanation B has many isomorphisms to C, then A is more likely to have many isomorphisms to B. "Different names for the same thing" is a folksy description of an intense isomorphism. In any case, plausibility certainly isn't tied to how "scientific" the words sound, regardless of the custom/preference for Greek/Latin. Despite its laughable sound, "'charm' is a 'flavor' of 'quark'" is a weighty statement because of its isomorphisms to experiments as well as other experimentally-verified statements.
5 Dogmas can come in endless forms, varying by the source authority and the severity of imposition. Propositions about the supernatural are some of the purest examples, but other domains aren't immune either. Sometimes the dogmas assume the innocent shape of, "Everyone knows that, so proof isn't necessary."
6 I wholeheartedly agree with Deutsch's warning against level-based prejudice of the reality of an explanation. A concise overall explanation, when correctly understood, can be as real and meaningful as hundreds of minuscule details that explain the same occurrence. That means the "real stuff" in the explanation could be relatively abstract or broad. Any time that the orchestration of many parts achieves a total effect greater than every part working independently, higher-level explanations are mandatory. I'm nothing except for a biological group of multitudinous cells, but all those microscopic facts communicate virtually nothing of significance about me.
7 Some relevant sections of the book contain excellent critiques of the Copenhagen interpretation of quantum mechanics. Deutsch is good at that.

Friday, January 13, 2012

words that authors set on Repeat

Full-length books are certain to contain repetitive words. Some of those repetitions are more noticeable than others.
  • In The Beginning of Infinity by David Deutsch, one of those words is "parochial". 
  • In Darth Plagueis by James Luceno, one of those words is "rock" as a verb, which refers to a head motion, often from side to side. Are you ready to rock? Your head?

Wednesday, November 09, 2011

Willpower and narratives

I read Willpower by Roy F. Baumeister and John Tierney. According to the described research, energy is integral to self-control. Resistance to the temptation of short-term impulses is "real" work. These systematic study findings agree with anecdotal experience: tired or hungry humans are more likely to act irritable and self-indulgent, even when the fatigue is mental. Thus good sleeping and eating routines are causes or fuel for self-control as well as effects of it.

Philosophically speaking, these overall facts on willpower add to the formidable pile of compelling evidence against the mythical "disembodied mind" (soul). Humans have startling potential for decision-making and self-denial. But aside from abnormal genes and/or painstaking training, they have limits1. They can't actually force their bodies to do whatever they wish, no matter how much perseverance stirs within their alleged supernatural "hearts". Most obviously, the underlying physicality, i.e. the brain, will shut down at some point due to simple exhaustion. Although before that happens, involuntary reactions probably will wrest the weary body away from the tyranny of conscious direction2.

Whether or not someone believes in a mystical basis for willpower, the book has practical suggestions. I imagine that my initial retort is like that of other know-it-all readers: "That's it? If lasting behavior modification were that straightforward, why would failure be so prevalent?" However, a few of the book's propositions fit snugly into my favorite personal schema for human willpower: narratives. The more effectively that someone constructs and follows a narrative of a desired plan of action to reach a goal, the greater the chance of success. Examples:
  • The book's preference of specific details over broad intentions is a factor in a narrative's perceived level of "reality". A vague narrative is prone to treatment as a fantasy. Mushy rules or to-dos are difficult narratives to interpret as true - or as false in the case of failure.
  • One of the book's foremost points is the need for frequent monitoring and short-term milestones, combined with the willingness to be flexible and forgiving in the immediate-term when inevitable complications occur. The same applies to a narrative, which must inhabit attention for it to be a tangible guide. Narratives must often enter everyday awareness in order to be measurements and signposts for actual deeds.
  • Long-term rewards and consequences require extra emphasis in thoughts, because the competing short-term incentives are naturally louder. Good narratives are clear on the desirability of the far-off benefit, so attentiveness to a narrative substitutes a different, imagined item for the present temptation. The narrative projects the decision-maker outside the influence of the current time and place. Taste of future victory compensates for withdrawal now. Other than abstract reinforcement, this mental operation is a helpful distraction3.
  • Orderly environments and personal habits aid willpower. Messes consume mental resources, leaving less room for considering proposed narratives. In contrast, tidiness of oneself is a subconscious increase in the narrative's plausibility. Little triumphs and confidence boosters bring it within reach. "I see that I can exert control. Maybe I can carry out a hard narrative too."
  • "Precommitment" could be the willpower technique that aligns closest with narratives. What else is the essence of a narrative, if not a vision of what future-me will do (and be)? 
Perhaps the preceding are signs that I'm trying to cram the book's discoveries into a biased framework that I find attractive. I readily admit that I'm enamored with narratives as the "secret weapon" of humanity's advanced aspects. Language is a medium for narratives. Fancy plans are narratives ("Two right turns then a left"). Self-concept is a narrative. Sympathy is a narrative that features another, in which the self can be exchanged for the main character. Indeed, any interpretation of another's actions or knowledge depends on narratives4. Teachings on morality, character-building, and religion employ particularly dramatic narratives for manipulation.

Entrenched narratives are more than ideas, too, because humans act in response5. Unlike other organisms, which are dominated by fairly simple inborn drives and brains, humans incorporate surprising complexity in their decisions. Viewing themselves as part of larger narratives, their acts and roles need not exhibit complete biological/evolutionary reasonableness. The narrative mechanism enables a vertiginous third-person perspective beyond the self: "What I do here today will echo across history and inspire other trite expressions..." Angst-heavy humans are capable of envisioning the implications of a choice on the trajectory of the chooser's life story. They can be embodiments of principles. They can feel the coercion of idolizing an ideal version of themselves6. The grip of relentless narratives yields levels of human willpower which shouldn't be underestimated.

1My father once commented that part of the fun of watching Survivor is to see how long the contestants manage to act normally. As normal as the typical Survivor contestant, anyway. The strain breaks/hardens people in different ways.
2Recall the common remarks, "I don't remember giving in. Eventually my attention wandered for a moment, and it happened automatically." Some commentators have said that humans more accurately have free-won't rather than free-will. Alcohol doesn't introduce strange motives. All it needs to do is suspend rational judgment. Conscious courage is the ability to override impulsive fear. Mere fearlessness could come from deficient perception or comprehension of risks.
3Distraction is underrated. Illusionists and experimenters have demonstrated repeatedly that a sufficient distraction virtually eliminates other stimuli. Stopping to ponder a questionable option is less advantageous than minimizing it by moving on. Simply put, more time spent simultaneously contemplating and yet "fighting" a motive corresponds to more moment-by-moment opportunities to surrender.
4Specifically, the "theory of mind" presupposes that the self's mind is an adequate model for others' minds. Their point of view is obtained by putting the self's mind in their narratives. Carl's eyes are shifty. If I were Carl, shifty eyes would indicate that I was hiding something. So by matching the real narrative of Carl with a hypothetical narrative about myself, I assume that Carl is hiding something.
5And according to what I've previously mentioned about philosophical Pragmatism, entrenched narratives play a still deeper role. The narratives that a human has judged to "work", by whatever standard, are the tools for constructing human truth from confusing/ambiguous raw data. Moreover, numerous confrontations with reality may prompt complicated revisions and additions to patchwork narratives. Otherwise the narratives no longer would "work". ("My paranoia can accommodate the new facts quite neatly...")
6The parenting section of the book doubts the effectiveness of training self-esteem versus training willpower. I don't dispute that. In terms of an aspirational narrative about the self, the distinction is how demanding the narrative is: not only the size of the relative gap between the ideal and the "real" self but also the extremity of the ideal. Humans who think "I've reached my apex just the way I already am" are aiming at a lower target than humans who think "I'm going to try to become elite".

Thursday, November 03, 2011

to be agile is to adapt

Not too long ago, I read Adapt by Tim Harford. It's an engrossing presentation of a profound idea: beyond a particular bound of complexity, logical or top-down analysis and planning is inferior to creative or bottom-up variations and feedback. Adaptation can be indispensable. Often, humans don't know enough for other approaches to really work. They oversimplify, refuse to abandon failing plans, and force the unique aspects of "fluid" situations into obsolete or inapplicable generalizations. They're too eager to disregard the possible impact of "local" conditions. Biological evolution is the prime example of adaptation, but Harford effectively explores adaptation, or non-adaptation, in economies, armies, companies, environmental regulations, research funding, and more. Although the case studies benefit from adept narration, some go on for longer than I prefer.

Software developers have their own example. Adapting is the quintessence of Agile project management1. As explained in the book, adaptive solutions exploit 1. variation, 2. selection, and 3. survivability. Roughly speaking, variation is attempting differing answers, selection is evaluating and ranking the answers, and survivability is preventing wrong answers from inflicting fatal damage.
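For concreteness, here's a minimal sketch (mine, not Harford's) of the three elements as a loop: variation proposes a slightly different answer, selection keeps the better-scoring one, and survivability caps the cost of any single bad try at one wasted round. The fitness function and every name below are invented for illustration.

```python
import random

def adapt(fitness, seed, rounds=200, rng=None):
    """Minimal variation/selection/survivability loop.
    fitness scores a candidate (higher is better); seed is the start."""
    rng = rng or random.Random(0)
    best = seed
    for _ in range(rounds):
        # Variation: attempt a slightly different answer.
        candidate = [x + rng.uniform(-1, 1) for x in best]
        # Selection: keep whichever answer scores better.
        # Survivability: a bad candidate costs one round, never the whole run.
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

# Toy fitness landscape: get close to the point (3, 4).
def target(c):
    return -((c[0] - 3) ** 2 + (c[1] - 4) ** 2)

result = adapt(target, [0.0, 0.0])
```

Nothing in the loop "understands" the landscape; it simply cannot lose much on any one try, which is the book's argument for why adaptation beats grand plans when knowledge is thin.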

Agile projects have variation through refactoring and redesign while iterations proceed. Agile code is rewritten appropriately when the weaknesses of past implementations show up in real usage. Agile developers aren't "wedded" to their initial naive thoughts; they try and try again.

Agile projects have selection through frequent and raw user feedback. Unlike competing methodologies with excessive separation between developers and users, information flows freely. Directly expressed needs drive the direction of the software. The number of irrelevant or confusing features is reduced. Developers don't code whatever they wish or whatever they inaccurately guess about the users.

Agile projects have survivability through small and focused cycles. The software can't result in massive failure or waste because the cost and risk are broken up into manageable sections. Agile coaches repeat a refrain that resembles the book's statements: your analysis and design is probably at least a little bit wrong, so it's better to find out sooner and recover than to compound those inevitable flaws.

1Of course, the priority of people over process is also quintessential.

Tuesday, October 04, 2011

model dependent realism

I read The Grand Design. I'm long acquainted with much of the history and physics therein, albeit at a conceptual not mathematical level. However, I was fascinated by the description of the entire universe as a Feynman path. I can't make any knowledgeable comments on that or the M-theory stuff, of course. I couldn't help wondering if the sections on renormalization and "negative energy" would've been easier to understand with the careful, hand-holding inclusion of some undergraduate-level math. That's a hard balance to strike, though. Maybe I'll try some cross-referencing with the tome that's "heavy" in several senses of the word, The Road To Reality by Penrose. I doubt the two books share the same general opinions.

Since I'm monotonous, I'm obligated to compare the book's "model dependent realism" with my interpretation of philosophical Pragmatism. I noticed many similarities. In model dependent realism, humans perceive reality through the lens of a model. In Pragmatism, humans perceive reality through the lens of subjective elements like desire, focus, analysis, synthesis, theory-building, etc. In model dependent realism, humans select models for the sake of "convenience". In Pragmatism, the convenience of thoughts about reality is explicitly tied to how well the thoughts "work" for purposes. In model dependent realism, humans replace models as they compare their accuracy by experiment. In Pragmatism, humans adjust their knowledge of truth as they actively determine which individual truths are confirmed "in practice". Most infamously, in model dependent realism, an ultimate universal model of reality might simply be impossible, except as a quilted combination of an array of limited models. Just as infamously, in Pragmatism, truth isn't a standalone all-encompassing entity, except as an evolving collection of ideas whose two coauthors are the human and their whole environment.

Wednesday, July 27, 2011

drinking game for Choices of One by Timothy Zahn

Take a swig whenever you read:
  • a form of the verb "grimace"
  • "said grimly"
  • "mentally _____"
  • a form of the verb "growl"
  • a form of the verb "twitch"
  • the reply "Point", but that's actually very rare
  • Mara Jade deflecting a blaster shot back at the shooter
  • Luke thinking about the fact that he isn't a "real" Jedi or that he's not very good at doing _____
  • Han thinking about the fact that Leia's interested
  • "Firekiln" 
  • "Threepio" or "Artoo"...psych! Those two are missing!
Review? Oh, right. I liked it a lot. It feels like a character reunion hosted by Timothy Zahn. Like almost all Star Wars books, it isn't as consistently entertaining as the Hand of Thrawn duology.

Sunday, May 23, 2010

meaning through isomorphism

Moreover, Gödel's construction revealed in a crystal-clear way that the line between "direct" and "indirect" self-reference (indeed, between direct and indirect reference, and that's even more important!) is completely blurry, because his construction pinpoints the essential role played by isomorphism (another name for coding) in the establishment of reference and meaning. Gödel's work is, to me, the most beautiful possible demonstration of how meaning emerges from and only from isomorphism, and of how any notion of "direct" meaning (i.e., codeless meaning) is incoherent. In brief, it shows that semantics is an emergent quality of complex syntax, which harks back to my earlier remark in the Post Scriptum to Chapter 1, namely: "Content is fancy form."
prelude and the problem

Months ago I finally finished reading Douglas Hofstadter's Metamagical Themas. Since it's a collection of columns about varied topics, I don't plan to comment on it with the same level of enthusiasm that I applied to I am a Strange Loop (if I were, I would've gotten around to it much sooner!). But one of its recurring ideas, also raised in Hofstadter's other books, is so fruitful that I can't resist rambling about it at excessive length: meaning through isomorphism. I'd further claim that its importance rivals and complements that of self-reference, which usually has the spotlight in commentary about Hofstadter's ideas.

The universal philosophical issue or "problem" of meaning is easily explained. It's undeniable that people experience meanings and that a meaning is a relation. One chunk of stuff "means" another chunk of stuff; noun _____ represents, defines, symbolizes, or analogizes noun _______. But how can people reconcile or harmonize this experience of meaning with a universe that, according to the best means of detection and reason, consists of pieces of matter whose interactions are consistently indifferent to relations of meaning? Does/Can meaning really exist? Assuming it does, then what is meaning, where is meaning, and how does meaning originate? I'll work my way back to this later.

isomorphisms

The preceding questions have many proposed answers. I'm convinced that Hofstadter's description of meaning through isomorphism is a pretty good one. A mathematical isomorphism has a rigorous logical definition, but in the looser sense intended here, an isomorphism is simply matching one or more parts of one aggregate with parts of another aggregate such that one or more relations between the parts of one aggregate remain valid between the matched parts in the other aggregate. In a word, relations in an isomorphism are "preserved". (In passing, note the circular definitions that an "aggregate" is a collection of "parts" and "parts" are anything in an "aggregate" collection.)

To take an elementary example, if one aggregate is the set of numbers (4, 8, 15) and the other aggregate is the set of numbers (16, 23, 42), then an isomorphism that preserves the relation "greater-than" could match 4 to 16 and 8 to 23 and 15 to 42 because 8 is greater than 4 and 23 is greater than 16, 15 is greater than 8 and 42 is greater than 23, etc. (Naturally, the relation of "subtraction" is NOT preserved since 8 - 4 = 4 and 23 - 16 = 7.)
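The looser notion is easy enough to mechanize. The following sketch (my own, with invented names) matches the i-th element of one sequence to the i-th of the other and checks whether a given binary relation survives the matching.

```python
def preserves(relation, xs, ys):
    """Check that the matching xs[i] -> ys[i] preserves a binary relation:
    whenever relation(xs[i], xs[j]) holds, relation(ys[i], ys[j]) must too."""
    pairs = [(i, j) for i in range(len(xs)) for j in range(len(xs)) if i != j]
    return all(relation(ys[i], ys[j])
               for i, j in pairs if relation(xs[i], xs[j]))

# The greater-than relation survives the matching in the example above...
assert preserves(lambda a, b: a > b, [4, 8, 15], [16, 23, 42])
# ...but "differs by exactly 4" does not (8 - 4 = 4, yet 23 - 16 = 7).
assert not preserves(lambda a, b: a - b == 4, [4, 8, 15], [16, 23, 42])
```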

At first glance, this may seem like dodging the question of meaning. Why should anyone care that, through a greater-than isomorphism, (4,8,15) "means" (16,23,42)? Well, that depends on the situation. Hypothetically, if someone's purpose involved the greater-than relation and he or she could more easily manipulate numbers less than 16, then that person could work on (4,8,15) and use the isomorphism to apply the results to (16,23,42). Imagine the depressing story of a pitiful calculating device that can only store its numbers with 4 bits each but whose task is to find the maximum of 16, 23, 42. Still too trivial? Then ponder a comparable isomorphism between number sequences: taking the sequence 0..255, matching 0..127 to itself, and matching 128..255 to the sequence -128..-1. Now go read about the computer storage of signed integers to find out why this comparable isomorphism isn't a toy example at all.
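That 0..255 matching is two's complement, and it can be spelled out in a few lines (a sketch of the standard convention, not production code):

```python
def to_signed(byte):
    """Interpret an unsigned byte 0..255 as a two's-complement signed value.
    0..127 match themselves; 128..255 match -128..-1."""
    assert 0 <= byte <= 255
    return byte - 256 if byte >= 128 else byte

def to_unsigned(value):
    """The inverse direction of the same isomorphism."""
    assert -128 <= value <= 127
    return value % 256

# Addition mod 256 is a preserved relation across the matching, which is
# why hardware can add signed and unsigned bytes with one and the same circuit.
a, b = -3, 7
assert to_signed((to_unsigned(a) + to_unsigned(b)) % 256) == a + b
```

The hardware never "knows" whether a byte is signed; the meaning lives entirely in the isomorphism that the programmer chooses to apply.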

Thus the basic idea is straightforward but its implications are surprisingly wide-ranging as shown by Hofstadter in his more mind-bending sections. His exemplar is the isomorphism in the incompleteness theorems between numbers and the symbols of a formal logic system, although he returns time and again to descriptions of the human capability for analogy, whether in the contexts of translation or recognition or self-image or creativity. A common thread is the logically-strange tendency to transcend by self-reference, which goes by labels like "quote", "break out", "go meta", "use-mention distinction".

applied to computers

However, apart from complicated self-reference, Hofstadter admits in his serious AI speculations that mere meaning through isomorphism remains effortless for people yet bewildering to computer science. People can figure out an original isomorphism that works and then not rely on it beyond suitable limits, but a program can't. Whatever the underlying mechanisms that originally evolved in the human brain for reaching the "correct" answer in a complex environment, no artificial program has quite the same ultra-flexible Frankenstein mix of (what appear from the outside to be) random walks, data mining, feedback, decentralization, simulation, and judgment. Given its mental acts, we shouldn't be shocked by the sheer quantity of lobes and connections and structure in the brain, and maybe its operation is less a monolithic program than an entire computer packed with programs interrupting one another to get a chance at directing the body's motor functions.

On the other hand, this conspicuous lack of an AI for pragmatic isomorphisms is all too familiar to application programmers like myself. Our job is to fill the gap by the routine imposition of meaning through isomorphism. That is, we try to create a combination of algorithms and data that's an isomorphism for a specific human problem, like running payroll for a company. In a similar fashion, the total computing system is a stack of increasingly complicated isomorphisms of the application programmer's work. As Hofstadter writes in his columns on Lisp (and properly educated programmers know), the top-level program is compiled into an isomorphic representation, then the next level down does its own compilation, and so forth, until the original program is in the executable form appropriate for the relevant non-virtual hardware. The towering edifice is an impressive illustration of the exponential power of isomorphisms to capture and translate meaning into, ultimately, an electrical flow.
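One rung of that tower can be peeked at directly; here Python's `dis` module stands in for the Lisp of Hofstadter's columns (my substitution, not his), and the tiny "payroll" function is of course a placeholder:

```python
import dis

def payroll(hours, rate):
    return hours * rate

# The compiled code object is an isomorphic representation of the source:
# each instruction printed below matches a piece of the expression "hours * rate".
for instruction in dis.Bytecode(payroll):
    print(instruction.opname)
```

The exact opcode names vary by interpreter version, but the matching of source parts to instruction parts, with evaluation order preserved, is the isomorphism in question.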

(Some technically-minded readers may be questioning which "relations" are preserved by these isomorphisms. After all, one or more parts of the "pipeline" could possibly optimize the original code in any number of ways like function inlining, variable removal, tail calls... In this case, the relations are abstract but are in the category "the effect of the original": the order of dependent statements, the values of constants including user-visible strings, the access details of I/O performed. When relations like these in the original code aren't preserved, the pieces lower in the stack thereby fail to carry out actual isomorphisms that would express the meaning.)
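A concrete instance, under the assumption of a CPython interpreter: the compiler's constant folding discards the surface form of an expression while preserving "the effect of the original".

```python
# CPython folds the constant expression at compile time; the multiplication
# disappears from the compiled form, but the user-visible value survives.
code = compile("2 * 3", "<expr>", "eval")
print(6 in code.co_consts)  # True under CPython: the folded constant is stored
print(eval(code))           # 6: the preserved relation is the resulting value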

applied to information theory, communications, art

Of course, meaning through isomorphism isn't the only theoretical framework around for understanding computer processing as a whole. The same claim could be made for information theory, which is thoroughly successful and in use every day. Fortunately, the two are compatible. Say that the communication channel's sender and receiver each have aggregates, and the message symbolically encoded and sent over the channel is an isomorphism between their aggregates. So then the symbols of the channel's message indicate each match from one aggregate's part to the other aggregate's part. Before the first symbol, the receiver is at maximum uncertainty or entropy about the isomorphism. After the first symbol and each symbol thereafter, the receiver can use knowledge of 1) the matches communicated previously, 2) its own aggregate, and 3) its own aggregate's relations between parts to make increasingly likely guesses about the remaining matches (or correct randomly sent errors on a noisy channel). In accordance with information theory, good "entropy coding" for this message would send no more bits to the receiver than are required for the receiver's knowledge of aggregates and relations to infer the rest of the isomorphism. The isomorphisms processed in a computer system allow for lower information entropy and therefore greater compression. The most interesting portions of a codec have the responsibility of using relations among parts of the media stream to discard or reconstruct some parts of the aggregate stream.
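Under the simplifying assumption that the receiver treats every bijective matching as equally likely (my toy model, not information theory's full machinery), the shrinking uncertainty can be put in numbers:

```python
import math

# With n parts still unmatched there are n! candidate matchings, so a uniform
# prior puts the receiver's uncertainty at log2(n!) bits; each received match
# removes one part from consideration and lowers the entropy toward zero.
def matches_entropy(unmatched_parts):
    return math.log2(math.factorial(unmatched_parts))

for n in (3, 2, 1, 0):
    print(n, round(matches_entropy(n), 3))
```

Knowledge of the relations between parts would shrink these numbers further, since relation-violating matchings drop out of the candidate pool, which is exactly the compression opportunity described above.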

Given the compatibility between meaning through isomorphism and information theory, it's unsurprising that communication in general is perhaps its most natural manifestation. A language is an aggregate of spoken or written words. The universe is an aggregate of pure experience (I dare you to be more vague!). Hence a worded message is an isomorphism between language and the universe. Rather, the message is an aggregate of words that were purposefully selected in order to communicate, via the language-universe isomorphism, an aggregate of thoughts about pure experience. The relations preserved by this isomorphism are countless and endlessly varied. In the message "The quick brown fox jumps over the lazy dog", "brown" is an adjective to "fox" so the indicated fox must have the color brown in depictions, "jumps" is a verb to the subject "fox" so the indicated fox must be in the midst of a jumping action, "over" is a preposition connecting the phrase "The quick brown fox jumps" to the phrase "the lazy dog" so the indicated fox must be of higher elevation than the indicated dog. Part of the reason computer parsing of raw human language is stymied is a computer's lack of a human's uncannily deep well of experiences to fuel a feedback loop between the comprehension of syntax and semantics. In practice, the nuanced syntax of sentence structure, word forms, and connectives nevertheless results in highly ambiguous statements that require worldly knowledge and/or context to disentangle.

How exactly people can decode their own words is a fiendish and glorious enigma that's convinced many speculators to tie language to the essence of humanity. Those who presume a soul frequently equate it to the explicitly verbal segments of intelligence (e.g. "confess with your lips"). It's definitely a truism that, of all earthly creatures, people have the most developed and subtle languages. They can organize their mental and social lives to levels of complexity that contrast with the simplicity of their materials. All their creations in language, art, and other domains can have symbolic depth.

applied to the distinction between form and content

These feats of abstractive composition and interpretation lead to a commonsense division between a work's "surface form" and its "meaningful content". For instance, a surface form of red is said to represent the inner content of the artist's aggression, and a milestone of any artistic genre is the point at which critics begin to tirelessly accuse its artists of betraying the genre's pioneers by producing "mindless copies" that mimic style (form) but without substance (content).

Finally I return to one of the points in the opening quote. The practical classification of an expression's qualities into form and content just described is contradicted by Hofstadter's pithy motto "content is fancy form". In his words, "[...]'content' is just a shorthand way of saying 'form as perceived by a very fancy apparatus capable of making complex and subtle distinctions and abstractions and connections to prior concepts' ". Elsewhere he writes that form and content are on a "continuum", and the gap between them is "muddy". Form and content are simply the same expression evaluated on different strata or in different domains. Syntax and semantics are different in degree, not in kind. If someone isn't taken aback by this proposition then he or she might not grasp its import.

I've found it instructive to employ this perspective on John Searle's thought-provoking Chinese room argument. There's a person, completely ignorant of written Chinese, in a closed room armed with only the materials necessary to 1) receive a message in Chinese, 2) manually execute the algorithm of a program that can pass a Turing Test in Chinese, 3) send the reply generated back out of the room. Assume, as in any Turing Test, that the judge outside the room who sends the original message and receives the reply can only rely on the messages to determine whether the unknown correspondent knows Chinese. Since the algorithm in the Chinese room can pass the Turing Test by producing highly-convincing replies, isn't it true that 1) based on the replies the judge will conclude that the person in the Chinese room understands the messages and 2) the judge's conclusion is in direct contradiction to the person's actual level of knowledge of Chinese? If you grant these two points, then the Turing Test criterion for "understanding" fails to find the right answer for the person in the Chinese room. Now change the role of the Chinese room inhabitant from a person+algorithm into a computer program executing the same algorithm. Remember that the person in the Chinese room passed the Turing Test by "shuffling symbols" that had no meaning to him or her. Is it any more reasonable to think that a program that passes a Turing Test is doing anything more than "shuffling Chinese symbols" like the person in the Chinese room? The upshot of the argument is that no matter how well a message's form of linguistic symbols is processed, its content or meaning could still be unknown to the processor; understanding of form does not imply understanding of content so content cannot be form.

As I see it, the meaning-through-isomorphism interpretation leads to a disturbing viewpoint on the Chinese room argument (reminiscent of how the EPR "paradox" led to disturbing but theoretically-consistent results for quantum mechanics). I'm restricted to a single clue to deduce who in the Chinese room argument really knows the meaning/content/semantics of the message: the location of the aggregates to which the message's symbols are isomorphic. The argument postulates upfront that the person doesn't know the Chinese language. In other words, he or she doesn't have any information about the aggregate or relations of the Chinese language, but the message's symbols are parts of that unknown aggregate. Clearly no isomorphism happens in the person and so none of the intended meaning is there. I can agree that the person's lack of the necessary aggregate makes him or her clueless about the messages. But that leaves one possibility: the algorithm is the thing that understands the meaning. In the Chinese room, or indeed in any scenario akin to a Turing Test, in order for the algorithm to form convincing replies at all it must be able to decode the meaning, and to decode the meaning it must have the necessary aggregates to complete the isomorphism. Based on the usual opinion that understanding demonstrates intelligence, for the purposes of the Chinese room, the algorithm is more intelligent than the person. From the standpoint of the message's meaning, the person's participation in the communication is akin to functioning as the algorithm's tool, channel, or messenger (according to the customary literary allusion, the algorithm is the person's "Cyrano de Bergerac"). When meaning occurs through isomorphism, there's no logical contradiction. The judge's guess that the person within knows Chinese is nothing more than an honest mistake. Don't blame the messenger for passing a Turing Test.

applied to philosophy and the brain

I called the idea of an intelligent algorithm "disturbing", but the delegation of various "intelligent" tasks isn't novel. Mathematical calculations were one of the first to be handed over to algorithms and devices. Then there's the long list of recommendations for sundry occurrences (do this when there's a fire, do that when someone needs resuscitation). And how much of the typical job is reducible to either rote actions or following new orders whose rationale is unknown?

The disquieting aspect of the imaginary Turing Test algorithm is the unprecedented usurping of the noblest of intellectual pursuits, understanding meaning. A traditional philosopher might declare that to be human is to understand and, furthermore, understanding is an accomplishment that can't be performed by anything else. Rocks don't understand. Plants don't understand. Animals don't understand but many are trainable. In contrast, humans experience a detailed "meaning mirror" of the universe that's symbolized in their languages. The "meaning mirror" has the name "mind" or "soul". Humans can understand the meaning of an expression by its effect on the "mind". In summary, meanings are ethereal inhabitants of minds, and only humans have minds.

Such a traditional explanation is appealing (to some people especially so) but it's complicated because it grants "first-class co-existence" to a purely mental/non-physical world. By making access to the cognitive world a special privilege of humans, it's dispiriting to the prospect of AI ever arising. It's also messy because people tend to embellish the details of the non-physical world in a multitude of opposing directions. It conflicts with the normal and productive method of the sciences, which is to find physical causes for phenomena. It outlines the existence of meanings but at numerous costs.

Dropping the traditional philosopher's explanation leaves the philosophical question of meaning unanswered...unless isomorphism is introduced in its place. Isomorphism furnishes a plausible intellectual underpinning for meaning in a solely materialistic universe. An isomorphism requires only materialistic ingredients for its elements: parts, aggregates, relations, matches.

What materials? The choices are everywhere, as boundless as creativity. Earlier, one set of materials was the software and hardware components of a computer system. In his writings Hofstadter has used DNA as a sterling example whose meaning is the proteins transcribed from it. And in keeping with the section on communication, any usable information channel is a candidate for meaning through isomorphism.

For philosophical concerns, the more relevant set of materials for isomorphism is the human brain. If we're to give up believing in our non-physical realities, we should reasonably expect a competitive brain-based theory of our mental capacities and qualia. I expect the brain's networks to be effective materials for isomorphisms. The combined excitatory and inhibitory network connections seem like prime building blocks for exquisite parts, aggregates, relations, and matches. Connections in general are implicitly essential to the definition of an isomorphism. An aggregate is parts that are connected, a relation is a connection between parts, a match is a connection between parts in separate aggregates.

One can then concede the, for lack of a better name, mind/matter isomorphism: the physical layout of the brain's network is directly responsible for all "non-physical" thoughts we feel. I don't suggest that there's a "grandmother neuron" but that the numerous neurons and even more numerous neuronal junctions, in response to the onset of the relevant stimuli, effect a mental experience of grandmother, whatever that may be. Nor do I suggest that one brain's network layout of isomorphisms resembles a second brain's except on a gross regional level; the variance among individuals in immediate word-association responses certainly makes a closer resemblance doubtful. I do suggest that, with sufficient prior knowledge about the isomorphism between the specific brain's network and its environment, a "scan" of the merely anatomical changes associated with the formation of a new long-term memory would enable the scanner to know with some certainty what the memory was "about". (I'm skeptical that anyone could figure out a workable procedure for it. Brain interfaces are getting better all the time, but the goal is to clumsily train the brain and the interface to work together, not to accurately read the brain's ephemeral signals.)

applied to objectivity

The proposed isomorphism between a person's brain and his or her encounters with reality puts not only the philosophical categories of "mind" and "matter" in a new light but also the categories of "objective" and "subjective" meaning. Objectively, whenever a scientist examines eroded canyons with sedimentary rock walls, maybe unearthing fossilized water-dwellers, he or she can assert the ancient history of the river that flowed there long ago. The river left traces so the river can be factually inferred. Also objectively, given the full network of a brain (and many secondary clues?), the brain's memories could be deduced in theory. The thoughts of the brain left traces so the thoughts can be factually inferred. At the time people would've called the brain's thoughts subjective, but with enough work the thoughts might as well be called objective.

Obviously the isomorphism in a human brain is incredibly dense and interwoven, which makes the undertaking as complex as a perfectly accurate measurement and forecast of all the weather in North America. It's too hasty to proclaim it impossible, though. People have managed to translate hieroglyphics and break the Enigma code. The elusiveness of the "right" isomorphism doesn't disqualify it from discovery (well, not counting some exceptions like 1) the perfect elusiveness of "isomorphisms", like one-time pads, that by design have "matches" but preserve precisely zero relations, and 2) degradation/corruption of the aggregate's material medium).
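The one-time pad exception can be sketched directly (the function and variable names are mine): the "matches" are real, since applying the key again inverts them, yet no relation among the plaintext's parts survives into the ciphertext.

```python
import secrets

def otp(text: bytes, key: bytes) -> bytes:
    """XOR each part of the message with the matching part of the key."""
    return bytes(t ^ k for t, k in zip(text, key))

msg = b"attack"
key = secrets.token_bytes(len(msg))  # one independent random byte per message byte
ct = otp(msg, key)

assert otp(ct, key) == msg  # the matches exist: the mapping is invertible
# But the relation "these letters are equal" (msg[0] == msg[2] == ord('a'))
# carries over only by a 1-in-256 accident, since each key byte is independent.
print(ct[0] == ct[2])
```

With zero preserved relations, the receiver's entropy about the plaintext never drops below maximum without the key, which is the whole point of the design.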

Having seemingly decided that the derivation of meaning through isomorphism places it into the "objective" category, one could be forgiven for attempting to additionally place it into the "unambiguous and undebatable" category. In people's regular conversations, the two often go hand-in-hand; objective facts are brought in to silence the clamor of unfounded opinions. Not so for isomorphisms. In fact, an isomorphism's objective existence is why it can't have any inherent authority or precedence over other isomorphisms. Surely people can agree on any number and flavor of criteria for the selection of an isomorphism, but there's no physical coercion. (I could elaborate on how the personal selection of isomorphisms is supportive of pragmatic philosophy but I won't...)   

x + y = 815. I'm writing a message to send in a bottle, but in what language? I see a person holding two fingers against his forehead and I take for granted that he has a headache. You mention a common acquaintance by the name "Alex" and I conclude you're speaking of "Alexander" not "Alexandra". "Ambiguity" is more or less shorthand for too many isomorphism candidates to pick from.

In a much more formal context, computer scientists have established ever-growing sets of problems that are proven to be solvable by the same kind of algorithm. When a fast algorithm solves any problem in the set, it could attack the rest, too. On an algorithmic basis, the problems are isomorphic. A computer scientist searching for a generalized solution to the set of problems doesn't need to "prefer" one to the rest. (He or she need not be too depressed. In most applications a "good enough" or "partial" solution is adequate.)

The "relativity" (non-preference) of objective isomorphisms is mind-blowing to me. It turns the world "inside-out". The Earth is not the center of the physical universe. Neither am I the center of the universe of meaning. After a thunderclap, the impact sets air in motion. The sound wave is one set of molecules jiggling, then the next, then the next. By moving similarly, i.e. isomorphically, aren't the air molecules transmitting the "meaning" of the thunderclap? Eventually, the air in my ear canals moves after being pushed in turn. The movement of the drum corresponds to an isomorphic shift in electrical impulses (yeah, I know I'm simplifying it). The nerve cells isomorphically react and in so doing continue to pass along the "meaning" of the thunderclap into the brain that I like to refer to as "mine". The connections in that brain isomorphically mirror my stored memories, spilling over into my language centers. "I hear thunder." In the relative terms of all these isomorphisms, who's to argue that I'm the origin of my spoken thunder message? But I may not be the terminus, either. What if my statement motivates the people around me to leap into action? Aren't their actions isomorphic to my statement? My message has made their actions meaningful. It started with a thunderclap.

Tuesday, July 22, 2008

musings on I am a Strange Loop, part 4

I am a Strange Loop is a book about consciousness that displays unashamedly the imprint of the author. Its examples come from his life. It's far from emotionally sterile, since it includes his reactions and opinions. It offers glimpses of where his ideas originated. It includes excerpts from his letters. It shows his sincerity.

It advocates vegetarianism. To his credit, the author honestly relates how his attitude has developed and also when he has and hasn't eaten accordingly. He justifies it better than most vegetarians I have met, who focus either on the notable health benefits or on reminding everyone around them that meat does, in fact, come out of slaughtered creatures, as if the embedded blood and bones weren't sufficient clues. Belief in consciousness as a strange loop is intriguingly novel fodder for the vegetarian assertion that greater similarity in form between a human and a creature implies greater similarity in the experience of existence, pain, and death. The more that a creature's capability of awareness approaches a strange loop, the closer the creature is to being conscious, i.e. closer to being like us mentally. Monkey brain is therefore even less palatable than before.

My stance, to the extent I have one on the topic, is that consciousness and strange loops aren't pertinent to the decision of vegetarianism. Pain shouldn't be inflicted recklessly, regardless of the victim's intelligence, and power to take an action doesn't excuse it. Nevertheless, consuming other creatures ("ingesting flesh", as a vegetarian might say) doesn't have to violate those maxims, in my evaluation. Killing or torturing a creature in brutal fashion would be unethical. Killing a creature for no purpose other than fun or sport would be unethical, too. Excessiveness to the point of eliminating a species, no matter the "mildness" of the individual deaths, would be unethical. A person participating in the food chain, without malice, is merely natural. Color me a concerned omnivore...an omnivore who doesn't eat a lot of meat due to the nutritional hazards, which are aggregately dire.

Besides vegetarianism, the book advocates something else: Bach. A writer can extol any composer he or she wants; that's a prerogative of writing. I won't object to the suggestion that musical preference is often symptomatic of someone's sense of identity, because music is perhaps the most vivid expression of a culture and cultural identity can be instrumental (pun intended) in self-identity. I'm agreeable to the still-stronger hypothesis that the deep "shape" of someone's consciousness interlocks more firmly with some individual musical pieces than others, depending on the piece's use or abuse of rhythm, melody, tempo, tone, harmony. However, I don't accept that the amount of attraction toward particular music is reliable for gauging a consciousness on any scale of measurement. The relationship between consciousness and musical appreciation is too nuanced, too dependent on other factors--multivariate. Having said all this, I don't care much for Bach, though I enjoy isolated compositions. That's a low measure of praise, considering I enjoy isolated compositions in almost every genre I've heard.

Finally, fittingly enough, is the epilogue, "The Quandary". It's a good summation of the book's major points, but what I like (the strange loop remarked as it contemplated its past contemplations) is the admission that the concept of consciousness as a strange loop isn't immune to the perceptual gap separating inner and outer life. A consciousness convinced that it has no independent existence remains a consciousness that assumes it has independent existence! Moreover, one of the ingredients of a coherent whole of consciousness is the inability to perceive its parts. (Of course, it's rather circular, anatomically speaking, to picture the brain having a meta-nervous system devoted to sensing the activity of its neurons.) Without the unconscious and the subconscious, consciousness would be too distracted by itself to react to its surroundings, a situation which mental illness abundantly demonstrates.

Thus, all theoretical derivations of consciousness are doomed to failure in presenting a model that convincingly matches human experience, i.e. "common-sense". Scientific theories for epiphenomena and illusions tend "to ring hollow" anyway, no matter how much support the theories have. Counterintuitive theories are like complex numbers and transcendental numbers. Simple questions can have correct answers that defy expectations. Whether to respond to these answers with dismayed doubt or energized awe is the choice of the learner. Hofstadter has undoubtedly made his.

Monday, July 21, 2008

musings on I am a Strange Loop, part 3

As I was reading I am a Strange Loop, I was surprised that two aspects of consciousness weren't deeply discussed or further emphasized. The omissions may reflect a disagreement between Hofstadter and me, or the admirable goal of keeping the book short and focused. The first aspect is the importance of language. "The Elusive Apple of My 'I'" is the chapter that explores the nature and development of "I" (self-identity). Its descriptions seem plausible, but my inclination is to more explicitly tie consciousness to language. Consciousness, abstract thought, language, and sophisticated interpersonal relations all are among the most distinctive characteristics of humanity (although many species have these qualities in lesser magnitude). Interpersonal interaction is connected to language, language is connected to abstract thought, and self-identity is an abstract thought. Language's first purpose is communication, yet it has other uses. Otherwise, why would people "think out loud" or "talk to themselves"? It's a ready method for analyzing and ordering thoughts. It's commonly used to construct cognitive "feedback loops" of putting a thought into words, then reacting to the words with more words, etc. (Thank your long-suffering counselor or therapist today!) It's a way to expand one's memory with external storage, through reminders, notes, voice recorders, or just repeated muttering that marginally extends the time period of short-term memory. What's most important, it allows for fiction: discussing and inventing the nonexistent. I don't mean to disparage nonverbal abilities such as visualization and intuition, but the superior information encoding of language, predicated on a combination of rules and flexibility, has furnished humanity with the means to construct ever-higher towers of knowledge ("standing on the shoulders of giants" and whatnot).

Language doesn't trap thoughts. Language is a tool. However, language does guide thoughts and train thoughts along well-worn paths. Each time someone forms a valid statement, the same rules of language that enable it to be comprehensible conform the communicator's expressed thought: choices must be made as to subject, adjectives, verb, adverb, prepositions, etc. Thus, merely communicating a headache presumes the existence of "I" (or "my", "me", "Yo", "Je"). At some earlier time, other people said "I have a headache" or asked "Do you have a headache?" or "Does your head hurt?" Consequently, when I feel pain localized in my cabeza, it's simple to swap out the subject slot with "I", and the more adept people become at such subject-swapping the more they learn what self-identity (consciousness) is all about. "I" ("me"?) is a set of sensations, emotions, and actions appropriated from the rest of society to be able to label the individualized stuff, the subject-ive. When I'm a member of a group, one definition of "I" is "the group member whose face I don't see" (and whose hands I control, whose voice I use). If a member of a group caused a collective project to fail by slacking off on an assignment, and the consensus is that this member is designated by the name "Terry", then it's logical for Terry to make the conceptual leap to "I am a slacker", assuming Terry doesn't deploy a range of coping mechanisms. Like any other human creation, "I" is spread from one person to another, one generation to another, which is why it's curious to question the evolutionary value of consciousness: "I" is a linguistic, social epiphenomenon that was and is discovered and disseminated through joint efforts of human groups. Natural selection bore intelligence and language; consciousness was a by-product. Hypothetically, a completely solitary human wouldn't have or need "I".

Additionally, a tie between language and consciousness supports the notion that "I" could be at least partly an illusion. The illusory effect of language has been noted by philosophers who professed the persuasive axiom that philosophy's prime questions are examples of misapplying words out of meaningful context: absolutes created through questionable manipulations of real statements. For instance, pondering whether "purpleness" exists even though "purple" is merely an adjective. While the level of reality of "I" is closer to that of a theory or generalization than of a literary fabrication, its tenacious grip on the mind and its seemingly rock-solid existence are similar. Just as Good, Truth, and Reality are simultaneously undeniable to all and nevertheless extremely hard to define satisfactorily in concrete and absolute terms, "I" is evident and ethereal at once.

The second understated aspect of consciousness is the importance of the separate brain areas and functions whose collaboration is the content of consciousness, if not the bedrock. Neuroscience is prone to repeatedly shattering the image of the brain as a single entity. True, it's one organ in one area of the body. But this one organ isn't homogeneous. The "wiring" for receiving and interpreting sensory data is both highly significant and not fully understood. Each hemisphere has its specializations. Speech centers play a starring role. The amygdala and hypothalamus are just two of the regions whose activity can dominate consciousness. Then there are the functions that are so commonplace that people tend to notice them only during failures, such as motor control related to the cerebellum.

At those times, "I" abruptly becomes atypically ineffectual, and it's harder to picture "I" as an aloof, absolute dictator of the body. Other kinds of brain damage reduce consciousness in more drastic details like personality changes, memory problems, and speaking difficulties. The book mentions Alzheimer's, which I too have observed up-close in relatives--the gradual erasure of what makes the individual unique. All these afflictions, whether temporary or permanent, present a conundrum for the perspective that consciousness is metaphysical. In my experience, people with that belief respond by alleging either that the consciousness is "trapped" down deep inside a frail body or that the consciousness has partly let go. They may also say the consciousness will be its former "self" in the afterlife. I haven't received an explanation of why or how the consciousness would accomplish its reversion to a previous state, but I'm not heartless enough to demand one.

The point is that regardless of the connection between consciousness and strange loops, the harmonious froth in the brainpan between all the parts is key to what goes on. A theme of this book is the difference between the macroscopic and microscopic ways of examining objects and object interactions; microscopic physical particles and nerve cells underlie the macroscopic items that are more meaningful to us, and consciousness is best analyzed in the macroscopic domain as an epiphenomenon. Brain tissues and regions fit in a middle tier. This middle tier has more than enough to fill its own book, so its absence is understandable.

Leftover musings on I am a Strange Loop, on the other hand, fit in a part 4 tier.