Sunday, April 16, 2017

fudge-topped double fudge dipped in fudge

Two truisms to start with. First, finite creatures like us don't have the ability to swiftly uncover all the details of each large/complex thing or situation. However, we still need to work with such things in order to do all sorts of necessary tasks. Second, we may combat our lack of exhaustive knowledge about single cases by extracting and documenting patterns from many cases and carefully fashioning the patterns into reusable models.

I'm using "model" in a philosophically broad sense: it's an abstracted symbolic representation which is suitable for specifying or explaining or predicting. It's a sturdy set of concepts to aid the work of manipulating the information of an object. This work is done by things such as human brains or computing machines. The model feeds into them, or it guides them, or it's generated by them. It reflects a particular approach to envisioning, summarizing, and interpreting its object. Endless forms of models show up in numerous fields. Sometimes a thoroughly intangible model can nonetheless be more tangible and understandable than its "object" (a raw scatter plot?). 

Some models are sketchy or preliminary, and some are refined. Some seem almost indistinguishable from the represented object, and some seem surprising and obscure. A model might include mathematical expressions. It might include clear-cut declarations about the object's characteristic trends, causes, factors, etc. The most prestigious theories of settled science are the models that merit the most credence. But many models are adequate without reaching that rare tier. Whether a model is informal or not, its construction is often slow and painstaking; it comes together by logically analyzing all the information that can be gathered. It's unambiguous, though it might be overturned or superseded eventually. It's fit to be communicated and taught. Chances are, other models of high quality were the bedrock for it; if not, then at minimum these others aren't contradictions of it. It's double-checked and cross-checked. Its sources are identifiable.

Despite the toil that goes into it, the fact is that a typical model is probably incomplete. Comprehensive, decisive data and brilliant insights could be in short supply. Or portions of the model could be intentionally left incomplete/approximate on behalf of practical simplicity. When relevant features vary widely and chaotically from case to case, even an inflexible model couldn't be expected to perfectly match any case besides the "average". Perhaps it's even possible to model the model's shortcomings, e.g. take a produced number and increase it by 15%.
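To make that last point concrete, here's a minimal sketch of modeling the model's shortcomings. Everything in it is a hypothetical placeholder: the hourly rate, the 15% correction, and the function names are inventions for illustration, not real data.

```python
# A toy illustration of "modeling the model's shortcomings": a simple
# model is known to run low, so its output gets a documented, flat
# correction factor. All numbers and names here are hypothetical.

def model_estimate(hours_of_work: float) -> float:
    """A deliberately simple model: cost is $50 per hour."""
    return 50.0 * hours_of_work

FUDGE_FACTOR = 1.15  # suppose past cases ran ~15% over the model's estimate

def fudged_estimate(hours_of_work: float) -> float:
    """The model's output, adjusted by the documented correction."""
    return model_estimate(hours_of_work) * FUDGE_FACTOR

print(fudged_estimate(10))  # model alone says 500.0; corrected to 575.0
```

The point of writing the correction down as a constant, rather than applying it by feel each time, is exactly the transparency difference discussed below: a documented adjustment can be recreated by a second person.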

For whatever reason, the model and some cases will differ to some extent...but improving the model to completely eliminate the individual differences would be infeasible. So the disparity from the model is fudged. I'm using this verb as broadly as I'm using "model". It's whenever the person applying the model decides on an ad hoc adjustment through their preferred vague mix of miscellaneous estimations, hunches, heuristics, and bendable guidelines. Hopefully they've acquired an effective mix beforehand through trial and error. (The result of fudging may be referred to as "a fudge".)

If a realm of understanding is said to be an art as much as a science, then the model is the science and the fudging is the art. In the supposed contrast of theory with practice, the model is the theory and the fudging is one part of the practice. The model is a purer ideal and the fudging is a messier manifestation of the collision with the complexity of circumstance. The model is generally transparent and the fudging is generally opaque. Importantly, a person's model is significantly easier for a second person to recreate than their fudging.

The crux is that a bit of a fudge, especially if everyone is candid about it, is neither unusual nor a serious problem. But overindulgence rapidly becomes a cause for concern. The styles of thinking that characterize normal fudging are acceptable as humble supplements to models but not as substitutes. One possible issue is reliance on implicit, unchecked assumptions. Models embody assumptions too, but model assumptions undergo frank sifting as the model is ruthlessly juxtaposed with real cases of its object.

Another issue is the temptation to cheat: work backwards every time and keep quiet about the appalling inconsistencies among the explanations offered. Someone's slippery claim of fudging their way to the exact answer in case A through simple (in retrospect) steps 1, 2, and 3 should lose credibility after they proceed to claim that they fudged their way to the altogether different exact answer in case B through simple (again, in retrospect) alternative steps 1', 2', and 3'. A model would impose a requirement of self-coherency—or impose the inescapable confession that today's model clashes with yesterday's, and the models' underlying "principles" have been whims.

Yet another issue is the natural predisposition to forget and gloss over the cases whenever fudging was mistaken. The challenge is that new memories attach through, then later are reactivated through, interconnections with prior ideas. But a mismatch implies that the fudging and the case don't have the interconnections. The outcome is that a lasting memory of the mismatch won't form naturally. The primal urge to efficiently seek out relations between ideas comes with the side effect of not expending memory to note the dull occurrence of ideas not having relations. After being picked up, unrecorded misses automatically drift out of memory, like the background noise of an average day. A rigid model counteracts this blind spot because it enforces conscious extra effort.

The issues with replacing modeling with too many fudges don't stop there. A sizable additional subcategory stems from the customary risk of fudging: judging between hypothetical realities by the nebulous "taste" of each. That method would be fine as long as a person's sense of taste for reality is both finely calibrated and subject to ongoing correction as needed. Sadly, these conditions are frequently not met. Some who imagine that their taste meets the conditions may in actuality be complacent egotists who're incapable of recognizing their taste's flaws.

A few comparisons will confirm that overrated gut feelings and "common" sense originate from personal contexts of experience and culture. Divergent experiences and cultures produce divergent gut feelings and common sense. An infallible ability to sniff out reality wouldn't emerge unless the sniffer were blessed with an infallible context. That's...extremely improbable. Someone in an earlier era might have confidently said, "My long-nurtured impression is that it's quite proper to reserve the privilege and responsibility of voting to the kind of men who own land. Everybody with common sense is acutely aware of the blatant reality that the rest of the population cannot be trusted to make tough political decisions. Opening the vote to them strikes me as foolhardy in the innermost parts of my being."

But context is far from the sole way to affect taste. Propagandists (and marketers) know that repetition is an underestimated strategy. Monotonous associations lose the taste of strangeness. Once someone has "heard a lot" about an assertion, they're more likely to recall it the next time they're figuring out what to think just by fudging. Its influence is boosted further if it's echoed by multiple channels of information. For people who sort reality by taste, its status doesn't need to achieve airtight certainty to be a worthwhile success. Success is achieving the equivocal status that there "must be something to it" because it's been reiterated. A fog of unproven yet pigheaded assertions would be too insubstantial to meaningfully revise a model, but with enough longevity it can evidently spoil or sweeten a reality's taste by a few notches.

Repetition is clumsy, though. Without a model to narrow its focus, the taste for reality is susceptible to cleverer attacks. Taste is embodied in a brain packed with attractions and aversions. The ramification is that emotional means can greatly exaggerate scarce support. A scrap of support in a passionately moving frame manages to alter taste very well. In the midst of weighing perspectives by impromptu fudging, the stirring one receives disproportionate attention. If the scale has been tilted masterfully, someone will virtually recoil from the competing perspectives. Gradually establishing the plausibility of a model is a burden compared to merely tugging on the psychological reins.

If distorting taste by exploiting the taster's wants seems brazen, the subtler variation is exploiting what the taster wants to believe. It may be said that someone is already half-convinced of notions that would fit snugly into their existing thoughts. The desire for a tidy outlook can be a formidable ally. It's not peculiar to favor the taste of a reality with fewer independent pieces and pieces that aren't in discord. The more effortlessly the pieces fall into place, the better. Purposefully crafting the experience that a proposal gratifies this component of taste is like crafting the experience that it's as sound as a mathematical theorem. It will appear not only right but indisputable. Searching will immediately stop, because what other possibility could be more satisfying? ...Then again, some unexpected yet well-verified models have been valued all the more as thrilling antidotes to small-mindedness.

This extended list of weaknesses suggests that compulsive fudgers are the total opposite of model thinkers. However, the universe is complicated, so boundaries blur. To repeat from earlier, model thinkers regularly have the need to fudge the admitted limits of the models. And the reverse has been prevalent at various times and locations in human history: fudge after fudge leads to the eventual fabrication of a quasi-model. The quasi-model might contain serviceable fragments laid side by side with drivel. It might contain a combination of valid advice and invalid rationales for the advice. The quasi-model is partially tested, but the records of its testing tend to be patchy and expressed in all-or-nothing terms. It could be passed down from generation to generation in one unit, but there's uncertainty about which parts have passed or failed testing and to what degree.

Once someone does something more than fudge in regard to the quasi-model, it might develop into a legitimate model. Or, its former respectability might be torn down. The dividing line between quasi-model and model is a matter of judgment. If it's resting on fudges on top of fudges, then signs point to quasi-model.

Monday, March 27, 2017

blotted visions

To repeatedly insist that a popular belief is inaccurate is to invite an obvious follow-up question: then why is it popular at all? And this question turns out to have an abundance of answers as widely varied as the believers themselves and the effects the belief has in their lives. One answer that has grabbed my attention recently is that a belief can serve a function like an inkblot test. People can use it as raw material for representing and processing their thoughts. It evokes strong reactions, which they observe and analyze. It verbalizes and conceptualizes common features of being a typical human.

The strategy is like dropping iron filings near a magnet to reveal the magnetic field. Thereafter, they may feel dependent on the belief playing this illuminating role for them. It becomes their ongoing lens. The well-known example is morality. Given that their own conscience has always been seen through their belief, they unimaginatively assume that the lack of the belief-lens in someone else implies a lack of conscience too. Their symbols are "powerful" influences on them because the symbols' power comes from them and their mental associations. From then on, invoking the symbols is a means of self-regulating—or of others in society manipulating them, of course. A few chants or a few bars of a familiar song can deftly adjust moods...

I'm reluctant to indulge this idea too far in the Jungian direction. Picturing these inkblot beliefs as sharply defined entities in a collective unconscious doesn't feel correct to me. I don't consider the primal brain that sophisticated. I'd rather say that some long-lived beliefs have a time-tested flair for recruiting and orchestrating inborn instincts. The beliefs can give a framework for interpreting the instincts and provide particular channels (again, I hesitate to presume too much by calling the channels "sublimation").

It hardly needs to be said that the potent narrative pushed by a belief frequently distracts from its degree of accuracy. The relevant joke among non-fiction writers is "Don't let the facts spoil a good story". Captivating tales will spread rapidly regardless of the tales' (dis)honesty. Worse, stories confined by the bounds of rigorous honesty are at a disadvantage compared to the ones that aren't. Details tend to be imprecise, complex, and dispassionate, so stories with verified details tend to be more challenging and off-putting. As condescending as it sounds, for a lot of careless people in history and right now, an uncluttered story that rouses a profound interest in them is more effective at ensnaring their loyalty than a tangled story that's authentic. And the boost in effectiveness is virtually guaranteed if corresponding interest was intentionally cultivated in them over and over by people they trust. It's worth noting that the interest might not be baldly self-gratifying; it might be interest in having a simplistic, conquerable scapegoat.

Clearly, the outcome may be positive or negative when a belief is able to stimulate people's drives and/or reflections like a carefully-constructed inkblot. Nor is the basic technique unique to one category of belief. In physics, "thought experiments" exercise analytical understanding. In philosophy, an insightful intellectual has referred to similar contrivances as "intuition pumps". Numerous mythologies and fables are expressly repeated not to convey historical accounts but to render an earnest lesson as vividly and memorably as possible. But creations that aren't as lesson-focused still could be written with the goal of kindling intriguing discussions. Creators choose to employ certain words and images which they expect to act as subtle yet meaningful shorthand.

The depth of impact ensures that subjects remain susceptible to analogous inkblots long after they discard the belief. It doesn't take much for a newer instance to be reminiscent of the old. I experienced this myself when I went to Logan a little while back (were you thinking that I'd mention Arrival instead for this topic?). In this movie, Wolverine is a protective superhuman who voluntarily undergoes bloody torture to the point of death, for the sake of people who cannot save themselves. He presents himself to be pierced in place of them. He's very old. He's emotionally remote (to say the least), but once his commitment is made it's ferocious in its determination and it doesn't expire. He's closely acquainted with pain. His body doesn't deflect bullets and sharp objects like some superheroes'. After sustaining wounds that would normally kill, he just has the power to rise up again—er, almost always. Eventually he's a substitute father figure who advises against being monstrous toward others.

Nevertheless, he's someone who knows his dangerous character faults and his many mistakes...and also someone who, after some convincing by a respected authority, seeks to do what he can to redeem himself, mitigate consequences of his existence, and repay the kindnesses he's received. He has the hope of reducing the chance that someone else will endure a life like his. He's in a battle—and the movie makes it extremely literal—against the frightening aspects of himself. He may not be "reborn as a new man", but he's prodded into behaving in changed, productive ways. He transforms from isolation and despair to a renewed mission of improving the parts of the world he decides he must.

I was surprised by the intensity of my spontaneous responses to these inkblots, like the gushing of water flowing through a rut. Temporary appearances by these latent sentiments weren't sufficient to overturn my established judgments. But I was forcefully reminded that the beliefs I dropped, despite having critical flaws of all shapes and sizes, had exploited a striking capacity to "worm in" to my sensibilities. Although I snapped myself out of it years ago, on occasion I can recognize the unabashed appeal of impulsively clinging to something that "speaks to you" and appears to be embodying "truths too deep for words"...albeit only with the precision of an inkblot.

Tuesday, February 28, 2017

surveying the chasm

Identifying with a group doesn't stop me from critiquing the attitudes and customs of some who are in "my" group. This was also the case when I identified with the religious groups of my earlier years. I despised the rampant traits of ascribing the worst motives to anyone who doesn't believe in an identical god concept, reflexively distrusting anything unfamiliar, insisting on unwavering conformity to the smallest of doctrinal trivia, and so on...and so on...

The grievance I have with a few atheists online is their pattern of communicating as if an unbridgeable chasm separates them from everyone who disagrees with them. Rather than asserting that their spirits have become holier than thou, they implicitly assert that their elevated thinking processes have, without exception, become "sounder than thou". The poor wretches on the chasm's remote side aren't like the sharp-witted people in the group. Those oafs are more or less guaranteed to suffer from confusions of all shapes and sizes. Through oppressive "faith" they're apt to adopt awful ethical principles and/or absurd statements. They might be described as objects of pity who were entangled in psychological traps during the gullibility of childhood. They do ponder things, but they're unable to really comprehend and revise their mistakes. They don't notice self-contradictions. All expectations for them are lowered. When a low expectation is publicly met—perhaps publicized via a sensational, eagerly exchanged internet article—the usual arch reaction is, "No bombshell here, eh, am I right?" Commentary might feature terms like superstition, fairy tale, tooth fairy, Santa, magic, irrational, tribal, sheep, regressive, and of course hypocrite.

I recognize upfront that the unattractive tendency I purposely exaggerated isn't present in everyone who follows, tightly or loosely, my philosophies. Actually it might be mostly confined to a disproportionately loud, attention-grabbing minority. Or maybe it's more widespread but in a more moderate and tacit form. I need to watch out for when I slip into it occasionally.

Obviously it leaves a deeper impression on me because I started on the opposite side of the alleged chasm. Plus, I regularly interact with people who are "there" now. I'm motivated to mark our differences using more levels of contrast. I and a lot of other apostates know that we ourselves once appeared to live for a prolonged period on the old side even as we concealed our shifting sympathies and embryonic doubts. If there were a chasm, then aspects of us were already halfway over it, which led to us feeling like we were the odd ones.

Admittedly, there are significant numbers who haven't ever had these internal struggles to a comparable degree. Their entire selves are casually intertwined with one side. No part of them is receptive to alternatives. Staying put is as involuntary and vital as breathing. They themselves may be openly unconcerned by the prospect of chasms between them and other subcultures—they may insist on it. ("We take for granted that we're on the right track if we think and act nothing like you.") I can see how being around them often enough would entrench a chasm mindset.

Then there are the somewhat innocuous believers whose supernatural perspectives are fluid/informal, or fragmentary/unassuming, or almost totally irrelevant to their lives, or constructed by them out of the miscellaneous sparkly bits of more complete beliefs. Each of their concrete opinions and values, evaluated purely in isolation, might bear a closer resemblance to the people who are said to be across a chasm from them than to the radical believers whom they are said to belong with. They're strong candidates for joining together in common causes.

Broadly speaking, I don't find it sufficient to represent a wildly varied assortment of views and people with the repugnant examples alone. I'd prefer constant acknowledgment of the challenge of making summary judgments about all the diverse paths people take to deviate from materialistic naturalism. The majority of these paths are (or derive from) the abundant products of unrestricted group-facilitated creativity, socially reinforced and embellished for generation after generation. In fact, it's difficult to validly address a solitary subcategory, Abrahamic beliefs and believers, without first imposing narrower conditions on which segments are being addressed.

I should clarify that my wish for fewer prejudicial generalizations is more about style than content. I'm not reversing my position about the other side's inaccurate notions. An error or misdeed can be called what it is. And although I'm maintaining that not all of my dissimilarities from all of that side's occupants are wide as chasms...I'd say an important gap does set the sides apart. In my reckoning, two attributes pinpoint the definitive disparity between Us and Them.

The first attribute is wary but expansive curiosity. This species of curiosity reaches out to a sweeping extent of well-grounded information. It's the willingness and hunger to draw from any source that has clear-cut credence. It's not unfiltered absorption of baseless speculation or hearsay. It's considering unlikable information without immediately rejecting it and considering likable information without immediately pronouncing it legitimate.

The second attribute is conscientious introspective honesty. This species of honesty is shown by persistently weighing the worthiness of personal thoughts, especially when the thought is dearly held. Honesty, e.g. not looking away, is essential twice: honestly examining thoughts below the superficial layers, then honestly grappling with the authentic evaluation. Depending on the person and the circumstances, they may not heed this attribute's tough demands until they're presented with the chance multiple times.

The details make the difference. I'm not proposing that curiosity and honesty are foreign to Them, only these precise forms. Or, as it was with me in the past, these forms could be operating in deceptively restrained states. In Them, expansive curiosity is prevented from being too expansive. Honest introspection is conscientiously carried out but not too conscientiously. Boundaries surround the safe territory. Some commonplace questions have whole sets of rote replies. They serve as tolerable escape valves for the inner tensions caused by nagging doubts. But unanticipated questions that cut too deeply are taboo—and some radical replies to the permitted questions are taboo.

Labeling such people on the side of Them isn't a shocking consequence of a rule that strictly ties the gap-not-chasm to the two attributes. But I'm fully aware that it relabels another group entirely: people who may agree with me about a great deal. The gap that's more meaningful to me is how conclusions are obtained, not the conclusions themselves. Concurring with me on selected subjects isn't quite enough evidence that we think alike.

It can't be assumed that the two decisive attributes are appreciated by someone who by chance has never been steered toward supernatural stances, or was actively steered away by the pressure of their in-groups. Their distaste for particular ideas may be as externally guided ("cultural") as my bygone loyalties to the exact same ideas. Or maybe they were pleased to drop the ideas because their disposition is inclined to be contrarian, nontraditional, or rebellious. Or maybe they were driven out by uncaring treatment and senseless prohibitions. These reasons and personal journeys aren't automatic disqualifications; if they still have the attributes I'm looking for then they're fine in my outlook. If not...they probably have my support anyway, but my ability to relate to them will be reduced.

Additionally, whenever people have followed unsystematic routes to proper conclusions, the chances are higher that they'll follow those routes to improper conclusions regarding other topics. Experience shows people's surprising ingenuity at harmonizing a "right" answer with plenty of "wrong" answers: examples abound in political discussions. To be correct about ____ isn't to gain universal immunity from error. As I've read again and again, if every religion vanished then people would fill the vacuum with poorly grounded beliefs of other kinds such as conspiracy theories and pseudo-medicines. Every day, lots of nonreligious people unfortunately fulfill this trite rule of thumb. (It deserves reiterating that this predicament isn't an excuse to decrease skeptical criticism of many types of religion. The excellent reason why these targets have been more frequently hit is that these beliefs have been more methodically spread, embedded, handed excessive power, and involved in one way or another with awful dehumanizing ethics.)

Yet the potential shades of gray don't stop here either. Time can be a factor. Attributes aren't necessarily permanent but are demonstrated anew by the ongoing project of reapplying them. They could wax and wane. Or they could be impaired by individualized blind spots. Someone can unintentionally fail to engage their curiosity as much as they could have or honestly pay as close attention to the underpinnings of their thoughts as they could have. The lesson is that sometimes there barely is a gap at all between the quality of justifications employed by Us and Them...much less a chasm between people whose brain functions have and haven't "ascended" to a superior enlightened plane. We remain Homo sapiens aspiring to possess the most accurate ideas we can find.

Monday, February 13, 2017

android deluge

Months ago I recalled an obscure catalyst of my gradual de-conversion: wrestling with the arguments of the "emerging church". But I recall another one too that isn't usually featured in de-conversion stories. My hazy thinking was prodded forward by a deluge of androids. I'm referring to machines intended to exhibit a gamut of person-like characteristics, from appearance to creativity to desire. At the time in 2008, the specific instances with the greatest prominence to me were in Battlestar Galactica (2004) and Terminator: The Sarah Connor Chronicles. These two aren't uniquely important—obviously so, given that they were revivals of decades-old creations. The deluge of androids started in fiction long before these two. And it certainly has continued since, across a variety of media. By my estimate it's not lessening in strength...

It hardly needs saying that I'm not claiming that imaginary android characters proved or disproved anything. The critical factor they contributed was to broaden and direct imagination. They implicitly and explicitly highlighted the standard philosophical thought-experiments. "If an android were sufficiently advanced in its duplication of human thought and behavior, would its 'mind' be like ours? If not, why not? Is there a coherent reason why it becoming sufficiently advanced is impossible?" Some of these questions have been connected to "zombies" instead of androids, but the gist is the same.

As the unreal androids kept nagging me with the questions, my reading was providing me with corresponding answers. I was busy digesting two well-grounded premises, each of which is routinely confirmed. First, the elemental ingredients of humans are no different from the elemental ingredients of non-human stuff. The human form's distinctiveness arises from intricate combinations of the ingredients at coarser levels and from the activity those combinations engage in. Second, like I said in the preceding blog entry, information is encoded in discrete arrangements of matter (and/or energy flows). Ergo the details of the matter used in the arrangement aren't relevant except for obligatory requirements on its consistency and persistence. Information is perpetually copied/translated/transformed from one arrangement of matter to another. DNA molecules house information without having consciousness. The ramification of fusing the two premises is that because hypothetical androids are made of matter like people are, they're capable of manipulating information like people. This doesn't imply that it'll be easy to construct androids that encode information with comparable subtlety.

To admit this much is to invite the next epiphany. The perspective is reversible. If they're enough like us, then we're like them. Presuming that androids' intellects can function as artificial variations of people's intellects, couldn't someone with a twisted mentality—a sufficiently advanced android maybe—regard people as the original biological templates for androids? Calling the suggestion dehumanizing misses the point. Being a "conscious machine" all along, constructed from cells in place of gizmos, doesn't subtract from our subjective experience one bit. The experience of freedom is accessible to self-virtualizing robots.

Saturday, February 11, 2017


Last time, inspired by Sean Carroll's big picture look at philosophy, I repeated the big picture which I've expressed here before. As described by information theory, ideas in the loosest sense are symbolic arrangements of matter and energy flows. Brain matter has evolved accordingly to act as a flexible channel for the reception, assembly, modification, and storage of ideas.

Countless energy-consuming actions in the brain link ideas together. The links enable the ideas to be "hints" for one another. I'm not the first to suggest the analogy of a crossword puzzle. Each answer's written clue might be vague, but the intersections of the words are crucial clues too. A strongly certain answer reinforces (or negates) the correctness of the answers that reuse its squares. The importance of context shouldn't be underestimated.

Through these linking actions, people will inevitably identify some linked ideas, endpoints, which should be relevant in some way to external actions. By "external" I merely mean that the actions don't happen solely inside the brain. The absorption of information via eyes and ears would be enough to qualify. Actions lead to outcomes, and outcomes are deeply affected by realities. Afterward, people can judge the level of agreement between the outcomes and the endpoints. But this isn't the whole effect. Based on the already mentioned links, their revised judgments of the endpoints' accuracy should revise their judgments of the linked ideas' accuracy. Ideas, actions, and outcomes are shaped by a triangle of mutual relationships.

I recognize that this big picture will provoke complaints. It grants a disappointingly mundane status to ideas and then it pairs this demotion with a sizable role for error-prone people. Wouldn't it be preferable if ideas were said to be unchanging and independent? That way, at least a few ideas exist "out there" by themselves, apart from relying on squishy, messy humans. This alternative is to insist that ideas are sturdy things.

My view is that the sturdy-thing notion of ideas does have a diminished counterpart...but only through accepting a subtle redefinition. Instead of an idea having a quality of sturdiness, it can be evaluated by the number and intensity of the convergences it's involved in with other ideas. When a swarm of small, easily-checked endpoint ideas has been tested as highly accurate (facts), and all these align well with a single general idea, it's like these ideas are converging on the single idea. The single idea represents a valuable summary, trend, or explanation. Brain actions such as deduction might produce convergences as well: if several axioms and proofs yield a single idea, then it's a valuable theorem. The ultimate result, after tallying an idea's convergences, is to situate it on a relative continuum. A hub idea involved in a multitude of convergences of various kinds is precious. But an isolated idea that's diverged from is suspect.
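The tallying can be pictured with a small sketch. The "ideas" and intensity weights below are pure placeholders invented for illustration; the point is only the mechanism of summing convergences and sorting onto a relative continuum.

```python
# A toy sketch of ranking ideas by convergence. Each entry lists the
# other ideas that converge on it, with an intensity weight. All names
# and weights here are hypothetical placeholders.
convergences = {
    "hub idea":      [("checked fact A", 0.9), ("checked fact B", 0.8),
                      ("deduced theorem C", 0.7)],
    "isolated idea": [("one stray observation", 0.2)],
}

def convergence_score(idea: str) -> float:
    """Tally the number and intensity of convergences on an idea."""
    return sum(weight for _, weight in convergences.get(idea, []))

# Situate the ideas on a relative continuum: hubs rank high, isolates low.
ranked = sorted(convergences, key=convergence_score, reverse=True)
print(ranked)  # the hub idea outranks the isolated one
```

Nothing about the mechanism says what the weights should be; that's where the judgments about accuracy and scope discussed below come in.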

Unfortunately, raw convergence isn't irrefutable. It carries its own inherent risk: it isn't necessarily universal. Its scope is possibly limited, even when it's quite dominant for ideas in its scope. Survey responses gathered in Connecticut could converge to an idea, but it might differ nonetheless from the idea that survey responses gathered in British Columbia converge to. Paying close attention to scope is just a price of replacing sturdy-thing ideas with convergent ideas.

But before fixating on the perceived inadequacies of ranking ideas by convergence, my advice is to take a methodical inventory of what this ranking actually gives up by comparison. An idea that has been converged on many times, in many ways, is an idea that is very likely to be converged on once more. So it's probably beneficial for planning around the outcomes of future actions. An idea that hasn't contradicted high-quality ideas is an idea that is very likely not to flatly contradict additional high-quality ideas. So it's probably beneficial as a lens for comprehending proposed ideas. An idea that has succinctly captured the pertinent similarities in a series of repetitive events is an idea that is very likely to echo the pertinent similarities in upcoming events in the series. So it's probably beneficial as a prediction or model of hypothetical events in its scope.

I'd say that the list of drawbacks is looking insignificant. For most purposes, a heavily convergent idea and a sturdy-thing idea are alike. The reason is that convergence is part of the original concept of a sturdy-thing idea in practice. If an idea itself were a sturdy thing, then ideas/actions/outcomes would be expected to converge on it. The difference is whether convergence is interpreted as secondary to the idea or as defining its actual extent. Particular actions can't distinguish between the two interpretations. Perhaps the situation is reminiscent of a (positive) bank account balance. The account's owner can take the action of withdrawing currency from the bank account no matter what the account "really" consists of. For withdrawals, the bank account is like a stack of currency in a locked drawer—although the equivalency doesn't work at a failing bank.

The prospect of agreeing to humble ideas could spur the forgivable question, "If not ideas, then what is considered sturdy?" And the answer is lots and lots of real stuff. The milk in my refrigerator is a sturdy thing. My idea that the milk has soured isn't. This idea is linked to the endpoint idea that in the near future I open the milk container, hold it close to my nostrils, inhale deeply, and experience a sensation of odor. The idea of the sour milk is linked to more endpoint ideas, such as requesting that someone else sniff so I can watch their reaction. Depending on the outcomes of these endpoints, the idea of the sour milk might be a convergent idea or not. The milk's reality is gratifyingly sturdy. It affects the amount of convergence that my ideas about it have. The same may be asserted about a more "existential" idea about the milk: is it still in the refrigerator, or did some obnoxious household member empty it without telling me? My ideas about the milk, presently occurring in my brain, don't dictate whether the milk is now elsewhere. And if it really is gone, the actions I take won't result in my finding it; they'll result in my thinking the ideas associated with realizing that I'm not finding the milk.

If ideas regarding soured milk seem far too frivolous, Sean Carroll's writing contains a fitting candidate that definitely isn't. His "Core Theory" is an immensely convergent collection of ideas. Moreover, as he painstakingly explains, its confirmed scope is immensely broad. People's typical lives are within it. In effect, researchers and engineers recheck it repeatedly as they act. It's not a sturdy thing...but nonetheless we're metaphorically leaning on it all the time.

Saturday, January 28, 2017

swapping pictures

It's a bottomless source of ironic amusement that intellectual justifications for religious beliefs usually appeal most to the beliefs' current adherents. They relish hearing their cherished ideas defended and reconfirmed by multitalented communicators. And...I suppose I do too. That's probably why I took the time to read Sean Carroll's The Big Picture: On the Origins of Life, Meaning, and the Universe Itself and enjoyed most of it. I wish it all the popular or critical success that it can get.

With that in mind, it's no surprise that the book's major points weren't that groundbreaking or earth-shattering to me. His Big Picture is in harmony with mine. Nevertheless I appreciated Carroll's articulate and organized delivery as well as the specifics he laid out. I already knew about some of these supporting details and arguments—it's not my first exposure to these topics—but I learned some too. Obviously he can apply more physics knowledge to the Big Picture than I can, similar to how a neuroscientist can apply more neuroscience knowledge, or a philosopher can apply more philosophy.

As I see it, his model of "poetic naturalism" is consistent with my (poorly-named) "Pragmatism-ish". He proposes that there are diverse ways of conceptualizing reality. This diversity should be accepted but only on the overriding condition that each one can be mapped onto another without contradiction. Throughout these ways, ideas should be gauged with likelihoods that adjust appropriately as more samples of reality are taken. Likelihoods that aren't inherently 0 (logical impossibility) or 100% (logical certitude) shouldn't reach these extremes, yet likelihoods can and do reach values that are close to either pole.
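The point about likelihoods that move toward, but never land exactly on, the extremes can be sketched as repeated Bayesian updating. The following toy example is my own illustration, not anything from the book; the function name and the 0.8/0.2 sample likelihoods are arbitrary assumptions chosen just to show the arithmetic:

```python
# A toy illustration of a likelihood that adjusts as more samples of
# reality are taken, approaching 1 without ever reaching it.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: return P(idea | new confirming sample)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Suppose an idea starts at even odds, and each confirming sample is
# four times likelier if the idea is accurate (0.8 vs 0.2).
p = 0.5
for _ in range(10):
    p = update(p, 0.8, 0.2)

print(p)  # very close to 1, but not exactly 1
```

Starting from even odds, ten confirming samples push the likelihood extremely close to 1, yet the arithmetic can never deliver exactly 1; a disconfirming sample would pull the value back down by the same mechanism.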

Likewise in Pragmatism-ish, I've proposed that ideas, actions, and reality are in a triangle of relationships. Each of the three shapes/restricts/informs the other two. (I'm using the word "idea" in the most inclusive sense, so it might be a perception, concept, statement, hypothesis...) People perform the mental action of determining that if an idea A is likelier than not then idea B is likelier than not, if idea B is likelier than not then idea C is likelier than not, etc. These connections form a web (or network) of ideas. Eventually the web extends to ideas which may be termed "endpoints": ideas that should be expected to, with a particular frequency, match particular outcomes of particular actions really performed.

Once people verify these matches, or fail to, they can take the further action of judging what some of the ideas in the web mean, as well as the ideas' accuracy. To probe a supposedly meaningful and accurate idea is to question how it's ultimately grounded. What is it connected to within the web, and what relevant endpoints have passed or failed fair tests? I've sometimes referred to measuring an idea's meaning by its "verified implications". A folksy version, which is so short that it's vulnerable to numerous willful misunderstandings, is that truth needs to work for a living.

Like Carroll, I would say that ideas containing an element of subjective experience can be valid as long as those ideas are kept in their correct place in the web alongside other more objective ideas. Then the limitation of subjective experiences is always evident: the experiences are events that happened in one subject's body (including their brain). The idea is still an expression of something real if it's understood to occur as a movement of the matter in that body.

Complexity is inevitable when the sifting process is conducted with the proper care and labor. There are many possible cases. Ideas with "tighter" connections to verified endpoint ideas merit more confidence than ideas with looser connections. On the flip side, ideas with no connections merit little. These freestanding ideas, which may be "freestanding" from the rest of the web because they blatantly clash with well-grounded ideas, are analogous to Carroll's case of statements in one domain that can't be mapped onto other domains. Another case is an idea with meager meaningfulness because it connects equally well with opposite outcomes, so that in effect it asserts nothing of consequence. Then there's the case of multiple different ideas connecting to the same endpoints, so that there's some rationale for claiming that the ideas share a meaning. Skilled translators watch for this kind of subtlety.

The abstraction of a web of ideas has a resemblance to Carroll's vivid "planets of belief". I admire his metaphor. It should be spread. The suggestion is that, within an intellectually honest, curious person, compatible beliefs gravitate together to form a planet. Combining incompatible beliefs results in a tension-filled, unstable planet. Outside influences can affect the planet's stability, causing chain reactions and tectonic rearrangements. Under some circumstances, it can be prodded into breaking apart and reforming into a novel planet.

Some could object that Carroll ventured too far outside of his designated area, and he should've left topics beyond physics alone. My response would be that his readers should know better than to assume that he or anyone is able to cover the Big Picture comprehensively in so few pages. By necessity it's an overview or a taxonomy. Readers with greater interest will be doing their own follow-up reading anyway.

Sunday, January 08, 2017

opposing forces

Barnabas: If you don't mind, I'd like to jump now into the real purpose of my visit. I've heard about the...unusual ideas you've been spreading around.
Kyle: The Jedi Way.
B: Have you thought about how it compares to Christianity? Why does the Jedi Way make more sense to you?
K: It just does. I could argue for it using most of the arguments that you might have for Christianity's believability.
B: Doesn't it bother you that the stories behind the Jedi Way are fictional?
K: The amount of fiction in the Bible hasn't been a fatal problem for Christianity. I know there's been a ton of debate about how much of the Bible is factual or how much of it is metaphorical myth. What if the writers of the stories behind the Jedi Way were still "inspired" by the Force to insert certain ideas, even if they themselves thought they were writing fiction alone? And neither of us was there when the events took place. Who are we to say that none of it took place, in any form? Maybe the stories contain some exaggerations and mistakes, but the events are told accurately, by and large.
B: Okay, then put aside whether the little details are fiction and stick to the major ideas. Lots of people have observed striking similarities to isolated elements of Taoism and Buddhism. Isn't the lack of originality suspicious?
K: No, it fits the typical pattern. Religions come about as offshoots of other religions. Each one takes cues from precedent. Do I even need to bring up Christianity's origin—
B: Look, be reasonable, the stories we're discussing are full of mystical powers. Why aren't all the believers of the Jedi Way showing these off?
K: Maybe the flashy powers were intended only for that time period. And maybe we don't have the same kind of effective belief that the people in the story did. The figures in the stories demonstrate that it's not necessary to have superhuman abilities in order to trust in the Force, be mutual allies with it, and refer to it in conversation.
B: If the Jedi Way is more true, why isn't it believed in by more people? I mean, compared to Christianity?
K: Popularity isn't an absolute proof. Worldwide, Christianity is claimed by less than half the population. Obviously there are countries all over the world in which the majority belief isn't Christianity. Christians must agree with me that culture and social pressure can be used to make "false" beliefs dominant.
B: Okay, but the Jedi Way group is so tiny that there isn't any official authority over it. Who decides what it is? Isn't it up to personal whim and invention?
K: It's based on the stories we already mentioned. It doesn't have an "official" authority, but Christianity doesn't have an "official" authority either. All of the traditional global religions are divided up into bits and pieces, and the bits and pieces have separate conflicting authorities. There may be one Bible, but there's constant fighting over how to interpret it, and over which parts are important.
B: Here's something: "one Bible", you said. The Bible is the Bible because of a deliberate process that canonized some documents but pronounced others to be heretical. Who decides which stories to refer to in the Jedi Way?
K: There are canon rules. I won't bore you with the details. But the point is that not every story that's ever been published is of equal rank in the canon. Some stories override others. Reconciling apparent contradictions is treated like a pastime. And again like the Bible, certain ideas within a canon story still lead to puzzles and controversy anyway. So-called "midi-chlorians"...
B: You're implying that a huge authoritative role is being played by corporations. Their primary goals are profit-seeking and self-preservation. That's unsatisfying, isn't it?
K: As a Christian, I'm guessing that you're not purely bothered by massive organizations with rich budgets. Or by the buying and selling of related merchandise. The conscious goal of an organization doesn't need to be the Jedi Way—like how I said earlier that the conscious goal of the story writers didn't need to be non-fiction. If we seriously believe that the Force is supreme, then the Force can use organizations to serve its purpose, no matter what the organization is pursuing. Christians say the same about God's usage of the actions of unbelievers. Also, to reiterate, of course no authority is able to dictate what every individual believes about the Jedi Way. Consider how often individual Christians loudly disagree with the rulings made by the supposed "hierarchies" over them.
B: Wouldn't you say that a few important subjects are overlooked, though? What is there for you to assert about ethics?
K: The ethical code preaches compassion, peace, knowledge, democracy, mediation, the greater good, wise judgment. The broad principles aren't unique to Christianity. And Christians differ about the best applications of their principles. They argue about what a "real" Christian should act like. Someone who sees their ethics as rooted in the Jedi Way is no worse off.
B: What hope would you offer to someone who's worried about their past evil actions? What does someone do to improve themselves? Why would they?
K: There's a light path and a dark path, and we have light and dark sides in us too. The dark is quicker and easier and unreflective, but the light is evident when someone looks more calmly at the big picture. The light is chosen because it's the light.
B: So I should assume no rewards in an afterlife? No afterlife whatsoever?
K: Under normal circumstances, nobody lives forever. They return to something grander than their bodies: the Force. Anyone who lives past death only does it as part of the Force itself. This shouldn't be thought of as scary. Life and the Force are bound together. It's like coming home. Personal identity is meant to end eventually.
B: That brings up something else. Isn't there a soul?
K: Yes, there are souls. There's more to the universe than crude matter. Souls are bound to the Force like life is, and souls can receive whisper-like intuitions of guidance. Some feel this more than others.
B: I'll admit that believers like you may feel something, but they're in error about what they feel.
K: No, I think you will find that it is you who are mistaken. People with various beliefs feel the existence of something colossal beyond their everyday experiences. Some are Christian, many are not, and a few are sympathetic to the Jedi Way. In any case, these feelings don't prove a single viewpoint above the rest. I could as easily say that a big gathering of Christians, working together, kindles the movement of the Force. Or that the Force may be strong in specific sacred locations.
B: Well then, focus on other personal experiences of God's power. Unexplained medical recoveries, highly suggestive coincidences, sudden rescues, surprise acts of needed charity, and on and on.
K: I don't know if you realize this, but Christians tend to be the only ones who think those occurrences are convincing enough. If we're saying that there's a hidden cause coming to our aid, why couldn't it be the Force, not the Christian God? What would you say about the times that I ask the Force for help, and then something good happens?
B: I seem to be wasting my time here. You know, in the end, doesn't your set of beliefs come across as hokey?
K: Sitting on the other side from you, let me reply that the appearance of hokeyness depends greatly on a certain point of view.