Sunday, March 21, 2021

restrictions may apply to 15 claims

Every once in a while I'm abruptly reminded—accidentally—of the vast differences between my materialistic naturalism and the supernatural beliefs followed by many people I know...and by me too in the past. Of course, the size of this gulf doesn't imply that I crossed it with one stupendous leap. My long journey was a sequence of one small step after another. The change was so gradual that at the end it took some additional self-evaluation to simply realize where I'd come to.

Religious followers rarely take a close look at us de-converted people (or listen to what we plainly say). They're much more apt to think of total outsiders as their opponents. As they see it, their beliefs are unbeatable. Anyone with a thorough understanding would be convinced. Logically then, in the case of anyone who isn't convinced, the conclusion is that their understanding is faulty or incomplete. Total outsiders are opposed because they don't grasp the full truth and the whole story. They would be followers too if only the message were communicated in the right terms and then they surrendered to its charms.

Just by existing, de-converted insiders derail this line of reasoning. We decline to follow the beliefs that we were regularly taught for years and years. We not only learned but practiced the beliefs within a community, so our view is neither second-hand nor shallow. We were committed, yet we dropped that commitment after it didn't survive further consideration and self-honesty. Greater familiarity wasn't enough to keep us content. It contributed to our ultimate disbelief! We saw up-close that restrictions may apply to the numerous claims that we heard (sometimes via artistic forms). A brisk and incomplete list of the restrictions will emphasize that, while no single restriction on a claim could be convincing enough to overturn someone's core mentality, the sheer number piles up too high to be ignored forever.

  • Claim: God will never abandon you. Restriction may apply: Not only may you never see any concrete sign that God continues to be a personal companion of yours, there may not have been any concrete sign that it ever was.
  • Claim: Make bold petitions to God and whatever you ask will be given to you. Restriction may apply: Your request might be ridiculous or premature, so it will be rejected for good reason. Or it might not fit into the grand unknown plan of the universe; after all, every mortal has "their proper time" to succumb to death. No matter what, you'll be left guessing about what God's reaction to the request actually was.  
  • Claim: Your beliefs will give you joy in the midst of life's troubles. Restriction may apply: For the joy to reliably trigger, you might need to spend an extended time training your brain to reflexively obsess about an almighty being, whose smile you can't see, or perhaps the reward of an afterlife, which you cannot see for yourself beforehand.
  • Claim: Jesus was an idealized version of you. Restriction may apply: Anyone who lived at that time and place, and was raised in a vastly different culture, didn't significantly resemble you in behavior, appearance, or general outlook.
  • Claim: God's activity in human affairs will be obvious to you. Restriction may apply: Seeing God's caring intervention everywhere you look will depend on the mental lens that you view events through. By approaching every situation with high expectations, the smallest clue that might be construed as God's fingerprint can be magnified into solid evidence.
  • Claim: God will heal the sick. Restriction may apply: Sicknesses that are vulnerable to skilled physicians will be healed by their hard work. Of course, God can still have been assumed to play an unseen role in that...somehow.   
  • Claim: God controls everything. Restriction may apply: Tragedies with no apparent meaningfulness will happen. Societies will be governed by oppressors. Religious organizations will be led by people who can be motivated in petty ways. 
  • Claim: There is one set of religious beliefs that's genuine. Restriction may apply: Religious beliefs have multiplied into a bewildering variety of sets, and subsets, and subsets of subsets. Each one contains one or more details in contradiction with the others and there's no objective method to decide among the group.
  • Claim: The single firm foundation for ethics is the unchanging commands of God. Restriction may apply: The moral stances of the commands might seem outdated. Specific examples might require contorted interpretations that attempt to explain the intended moral for the modern age. Alternatively, the multiple commands might be exchanged for the simplistic sentiment, "Maybe just try caring about someone else for a change, huh? If you do that then feel free to pretend these other commands are unimportant relics and make up the rest for yourself. Always let your conscience be your guide."
  • Claim: Every wrongdoing is forgivable thanks to the unearned mercy of God. Restriction may apply: Forgiveness is granted through the arduous task of being a bona fide follower, rather than a mere pretender who parrots the right words. This task demands the giving of time, money, and effort. Sincerity is a prerequisite. Obedience is not sufficient; you must love your spirit lord.   
  • Claim: God will provide inner strength to do right instead of wrong. Restriction may apply: Inner strength is something you must laboriously cultivate by thinking regretfully about your past evil actions (confessing helps with that), growing accustomed to strictly policing your spontaneous thoughts, and choosing a new duller lifestyle that prevents common temptations. As with followers of any ideology, the human flair for compartmentalization might lead to the outcome that someone has extreme inner strength for some repulsive evils and yet they have zero resistance to their favorite evils.
  • Claim: Official religious documents are accurate and as trustworthy as God. Restriction may apply: Careful and impartial investigation of the universe might appear to strongly disagree with official religious documents. The disagreement can be overlooked as long as the corresponding parts of the documents are classified as metaphorical and poetic. Fortunately, if the documents descended from legends, the original writers probably would've readily admitted that they couldn't know for sure whether the legends had been embellished in countless retellings. In addition, the original process of deciding which documents became official might have been a difficult struggle between competing followers, some of whom would've said that the documents that didn't become official were more accurate.
  • Claim: Divine guidance will be provided when someone earnestly seeks it. Restriction may apply: The process of seeking might be a demanding one of prolonged prayer sessions and fasting. The guidance received might be minimal or a nonverbal feeling of "peace" about a tricky decision. Experienced followers will warn that divine guidance should be compared with other sources such as the aforementioned official religious documents, or someone's peers and authorities. As with every claim, the fulfillment of it might come from one or more people who are being moved around like uninformed pawns on God's massive four-dimensional chessboard.
  • Claim: God's presence is sensed directly during times of mass singing or praying or times of quiet contemplative solitude. Restriction may apply: The ease of sensing God's presence might vary depending on the follower's imagination, suggestibility, personality, mood, skill of visualization, and the level of distraction in their environment. It might be necessary to brush aside the fact that people can enter similar emotional states in nonreligious circumstances. Consistently getting the best results might require associating the psychological state to symbolic external cues, in the manner that Pavlov documented.
  • Claim: The part of people that provides a sense of identity and makes decisions is a nonphysical soul that persists after the body stops functioning. Restriction may apply: No satisfying answer will be offered to the classic philosophical question of how to strictly define the boundaries and interactions between the nonphysical and physical domains. Meanwhile, experts in all areas except theology manage perfectly well without the concept of a soul. (A more interesting tactic would be to reject this claim and argue the doctrine that the whole bodies of the faithful dead will be resurrected/restored prior to entering heaven in the unspecified future.)

I'm aware that none of this is revolutionary news. And it's most clearly applicable to my former culture of typical U.S. evangelical Protestantism, a religious category that's losing relevance daily through its own efforts (albeit not without the political equivalent of a kicking and screaming tantrum). Nevertheless, I have faith in the worthwhileness of a passing reminder about exactly why the "sales pitch" wore thin for many of us and still occasionally grates on us now.

Tuesday, June 09, 2020

unwinnable conditions in the game of defining free will

The sad truth is that games can slip into unwinnable conditions: game situations that cannot lead to an eventual victory. Many one-player card games routinely become unwinnable through an unlucky order of cards in a stack...or short-sighted mistakes in how the player moved the cards around. Some two-player games with multiple pieces, such as chess and checkers, become unwinnable by either player once they're left with next to nothing on the board. Adventure-style computer games, which depend on collecting and cleverly using items to escape danger, become unwinnable through losing (or overlooking) items that will be surprisingly crucial to survival.

Philosophical debates have a tendency to become unwinnable too. This happens when precious yet poorly defined ideas are somehow expected to meet clashing requirements. The result is that the corresponding debates cannot be won, if "winning" means hammering out a self-consistent version of the idea that's satisfactory to everyone in the debates. But the proper response isn't giving up. It's returning the debate to a winnable condition by dropping or reshaping some requirements. 

The idea of free will is a prime example, since discussions about it frequently end in stalemates. And it's particularly appropriate here because game-playing itself is an analogy for it. Regardless of how relatively trivial a game might be, the players carry out the widely recognized aspects of free will. They analyze the circumstances, then select from a set of actions, to attain goals. (Of course, topics have been usefully compared to games for a long time. Wittgenstein described language games. Game theory is a branch of mathematics that's constantly referenced in a range of contexts.)

Some common unwinnable conditions of the effort to define free will are worth examining. To echo the statement from earlier, the aim of doing so isn't to abandon free will but to pinpoint one or more troublesome requirements. The most fundamental of them should be attacked first: that free will needs to be capable of violating the physical laws that are followed by things that don't have free will. In a word it needs to be unabashedly miraculous. The equivalent in gameplay is spontaneously disobeying the game's rules—which is not the same as when all players agree upfront to slight modifications.

This places the debate into an unwinnable condition because the best evidence available simply hasn't uncovered this kind of violation. At the scale of the person-centered decisions customarily placed in the category of free will, the rules are firm and uncontroversial. Atoms aren't created or completely destroyed, although some decay. Energy only changes form, and it tends to become more diluted when it does. Velocity stays the same unless something acts to change it (including a velocity of zero). Objects cannot be accelerated to the speed of light. Opposite electric charges attract. If free will doesn't follow the known rules or patterns, then it cannot be reasonably integrated into the rest of reality. And presumably it could contradict any precise definition assigned to it.

A believable idea of free will should stay within the same broad boundaries that allow for the behavior of galaxies, continents, algae, clouds, platypuses, etc. Furthermore, there's an upside: reliable consistency. This is crucial for effective short-term decisions and also long-term plans. Acquired items don't vanish. Notebook pages filled with ink don't switch back to blankness. Projectiles descend at an expected speed. Food temperature doesn't suddenly diverge from the environment it's in. Arguably, without impartial rules to govern the consequences of actions, free will wouldn't be worth a lot in practice. The rules are tools as well as limits. People are participants in rule-governed existence, not spectators.

Admittedly, committing crimes against physical laws is an extreme requirement. Most people probably don't lump together free will and sorcery. By comparison, a seemingly moderate compromise is imagining the power to just disconnect from potential influences. Through the application of this power, decisions could be rendered perfectly independent, separated from external things, internal pressures, or the past. Decisions in this state could not possibly be manipulated by anything.

Nevertheless, this requirement's drastic solution has two problems. The first problem is that it leaves decision-making with a hollow core. It's certainly good that nothing can indirectly yank around decisions like pulling the strings of a puppet. But if the decision isn't determined by anything, then it's uninformed. It's cut off from the context that provides meaning to why action A was selected instead of B. If it's made for no deeper reason, then it's due more to chance than thoughtfulness. High unpredictability isn't synonymous with willful freedom. The problem isn't eliminated by merely using a fancy source of randomness, such as the precise probabilities of quantum mechanics. A heavily randomized decision is still the opposite of taking responsibility. It's more like the game act of rolling dice than the act of choosing to skip an optional dice roll or not. (Or it's like flipping a coin to choose to roll the dice?...)

The second problem with free will hinging on the power to disconnect is the sheer implausibility. Modern knowledge shows, in addition to the previously mentioned physical rules at the human scale, a grand web of cause and effect between the multitude of particles and energy fields of the universe. Therefore it's frankly bizarre to picture a tiny strand wandering off whenever the urge strikes. How could it possibly do that? Why should it have such authority over other strands joined to it? Breaking cause and effect at will qualifies as a superpower. Demanding it isn't a winning approach for arguing that free will is realistic.

An alternative is therefore necessary. Adaptation fits the role. It's when actions vary based on influences. The definition is vague because the actions fall into a host of categories. Adapting to an influence's impact might mean neutralizing or dampening it, amplifying it, aggressively countering it, preventing it, or redirecting it. In general terms it's perception followed by corresponding action.

Importantly, adaptation in the vague sense isn't a unique characteristic of human consciousness. It appears in many things of differing complexity. Single-celled organisms react depending on events in their minute but brutal ecosystems. Computer systems jump between Wi-Fi routers. Hedgehogs roll up into balls. Metaphorically, a spring could be said to adapt, because its "actions" vary based on the stress of squeezing. Obviously, its adaptation is far different from, say, people adapting to their circumstances by revising their life insurance. The part that matters is that feedback, the primitive building block of adaptation, isn't unusual or ghostly at all.

Moreover, advanced multi-layered adapting can come quite close to gaining effective control over decisions. If multiple influences are observed, understood, and consciously managed, then the influences are mastered. No single influence dictates the decision. The decision-maker isn't metaphysically separated from influences, but they don't need to be. All the paths from influences to the decision can be, er, adapted: continuing to exist but now much more complex. Within a person, these adapted paths are literal. The signals from the influences take more complicated routes through the brain's network of cells. 

For one person, the path from stimulus to rage (or panic or gloom, etc.) could be extremely simple and predictable. For another who has adapted by noticing the first signs of oncoming rage, and who has spent significant time weighing the pluses and minuses of how they act, the path traveled by the stimulus could be twisty and stop in any of several outcomes. For the latter, the "natural" or quicker path hasn't been vaporized by psychic force. It's been circumvented through the development and activation of competing paths. And since everything is material, the process of overriding costs actual energy. It's like two crowds attempting to outshout each other. With this in mind, free will can be more of an ideal to strenuously pursue than an inborn capability. 

One last feature deserves mention. As a temporary adaptation of behavior or thought is repeated over time, it's reinforced. This is an aspect of how brains evolved to learn. And with enough reinforcement, the corresponding brain path functions as the relatively "natural" or quicker one. So adaptation may become a method of self-change and even a type of self-determination. It has the caveat that it's gradual and difficult. Uttering the magic spell "I am different" doesn't instantly engrave adaptations into the self, any more than uttering a magic spell instantly etches a design into stone. It calls for visionary thinking and persistence, a combination which few things are capable of. To become an expert at a game, nothing helps as much as the "work" of playing countless times, making good and bad moves, in order to replace beginner instincts with seasoned ones.

Excessive focus on self-governance has a subtle risk. It might evoke a requirement that makes a definition of free will unwinnable: treating denial as too pivotal. To suggest that rejecting an impulse reflects free will, but fulfilling an impulse doesn't, runs into trouble. First, if denial isn't serving the purpose of a desired result, then it's not inherently meaningful. That purpose might be nothing beyond "proving that I'm the sort who doesn't do that" or "getting some enjoyment out of punishing myself", but it's present. Denial isn't a self-sufficient motivation. Second, the statement can be flipped. A decision-maker who's absolutely ruled by denial isn't convincingly freer than a decision-maker who's absolutely ruled by fulfillment. 

Third, some cases don't involve a strong element of denial in the outcome. Yet the decision-making process that led to fulfillment could be as rigorous as a process that led to denial. In either decision, the influences could've been thoroughly mastered (...or not). Picking an influence to obey can be a free choice. Free will isn't viewing decisions as stark intellectual puzzles, after pushy motivations have been completely drained out. Relative preferences at the very least should be factored in. If someone were playing a game, nobody would maintain that a craving for victory is incompatible with a reasonable style of play.

Clearly, a kind of free will can function without dragging along unsupported assumptions. That might not stop especially stubborn debaters from insisting on unwinnable conditions anyway, by raising a final requirement that to them seems equally clear. As they would say: for people to be truly free, they must make use of something apart from particles to make decisions. After all, a particle doesn't have the freedom to deviate from the predictions of a skilled onlooker. Predictable particles adding up to a person with unpredictable free will doesn't make common sense.

The retort won't surprise anyone who's heard this before. Despite their particles, people are in fact complicated. Particles connected in certain ways can rapidly "add up" to more unpredictability than one in isolation. Once a particle is connected to its surroundings, then predicting the particle depends on predicting the effect the surroundings will have on it...and that means predicting the movements of these surroundings. Then those surroundings in turn cannot be predicted without predicting the outer surroundings that affect the closer surroundings. The mandatory edges of the predictions keep getting pushed farther and farther. 

For instance, the full motion of a particle in one of the bones in one of the fingers of a person typing an email isn't predictable unless the next letter to be typed is predicted. The next letter depends on the word being typed. If the email is about an amusing memory gleaned from the previous day, then the word being typed depends on the person's mental reconstruction of the past. Or they may jump up from the keyboard after the doorbell rings, and then the motion of the one particle can't have been predicted without also predicting the doorbell ringer.  

The essential problem is there are too many particles interacting and too many independent motions to track. Transforming these inputs into accurate particle predictions would consume an impractical amount of space and time. And the dense simulation might start failing to match reality less than five minutes later, when the person receives a text asking for help...sent from someone on the other side of town. Sure, human free will without something apart from particles could in principle be calculated, but in practice that calculation is pure fantasy. And it's irrelevant next to the immense realm of possible actions for people made of particles, as they sift through all of their intricate concerns and justifications and construct endlessly creative variations. Given that deceptively limited games have turned out to have fascinating depths of strategy, there's no need to worry about feeling restricted by the few real walls around human decisions.

(Addendum: A short while ago I read How Physics Makes Us Free by J. T. Ismael. Parts of this blog entry are almost certainly surface-level restatements of ideas I picked up in it. But the book covers interesting topics such as the formation of coherent self-voices, entropy in human experience, and the network model of causes and effects. It also teaches words like diachronic and modal and immanent, with a glossary in the back.)

Monday, September 02, 2019

mid-real

Some common questions demand surprisingly nuanced answers in practice. For instance: "How real is this thought?" My unremarkable position is that a thought's realness is equal to its connections to actions whose outcomes affect and are affected by realities. These actions may be relatively mental, such as reviewing a mathematical proof, and/or relatively passive, such as observation. But at times the action is more or less the opposite of abstract: perhaps I claim that the thought of a tall, wide, concrete wall on my near left side is real because I can lean over and extend my left arm until I touch it.

This definition can be visualized as semantic distance. How much of a leap is there between the thought and the action outcomes that back it up? Is the thought a bit of a reach...thinly stretched...tenuous? Are there many links in the chain? Is it mostly a plain description or does it sneak in assumptions and speculation? If it's a fruitful substitution of one concept for another, then what qualities are added or subtracted? Depending on the wildly varying answers to these fine-grained considerations, the semantic distance could be like an inch or 500 miles...or in-between.

The consequence is that a thought's realness must also sometimes lie between real and unreal: mid-real. To deny this possibility would be inconsistent. Yet as exotic as it seems, a mid-real thought isn't that unusual. It's not mid-real like a ghost. To start with, it could be an ordinary thought that summarizes, such as the mean age of the people at a family reunion. It's very possible that nobody's age is equal to the mean age of everybody. Nevertheless, it manages to represent the group as a whole through a method that's transparent and clear-cut.
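The mean-age example can be made concrete with a few lines of arithmetic. (The ages below are invented purely for illustration; any list with the same shape would do.)

```python
# Invented ages at a hypothetical family reunion
ages = [4, 9, 35, 39, 41, 67, 72]

mean_age = sum(ages) / len(ages)  # about 38.14

# The summary is "mid-real": it represents the group faithfully
# through a transparent method, yet it matches no individual member.
nobody_has_mean_age = all(age != mean_age for age in ages)
print(mean_age, nobody_has_mean_age)
```

The point survives the computation: the mean is tethered to the group by a clear-cut procedure, even though no single person embodies it.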

Someone may point out that summarizing thoughts are obviously real because they're not mere fantasies. The problem with that argument is that not only fantasies but metaphorical communication in general is capable of conveying real meanings; these meanings are simply emotions and analogies rather than bare facts. The dramatic thought "I couldn't take another step" is more expressive of current foot-pain frustration than of the reality of complete leg exhaustion. The thought is therefore mid-real. Its obvious exaggeration is less connected to the superficial meaning of the sentence than to the communicator's real state of mind. The semantic distance is a longer scenic journey. Grand fictional narratives can be mid-real in a loose sense also. Whether it's a talking animal fable or a dramatization of true events, the accompanying message or theme that it carries is precisely the part of it that's more real.

However, narrowing focus onto purely literal thoughts isn't a surefire escape from mid-real status. Too many complications crop up. For example, a straight line from a cause to an effect isn't always the case; many factors can affect one thing simultaneously. In these situations, the naive thought "my favorite single factor is enough to thoroughly explain the state of the complex thing" is neither strictly real nor strictly unreal. It's just mid-real because the factor is surrounded by others, and its own very real effect is partial. I'm reminded of thoughts about diet correctness. Both a person's body and the items they ingest are highly complex. It's tempting to pick a dietary factor and embrace the thought that it's the one real cause of desirable (...or undesirable) physical welfare. Then the thinker is freed from the burden of admitting that the factor's level of control might be greatly modified by the presence or absence of other factors: how the food is prepared, what else is ingested at the same time, personal sensitivities, how much of the food is eaten, and the time of day. But the topic isn't hopeless. Well-grounded (albeit boring) dietary recommendations abound. One or more minor improvements are better than none.

While the messiness of realities can lead to certain thoughts being mid-real, the messiness of actions can too. The truth is that actions' outcomes are routinely imperfect. Besides the inherent limits of all feasible actions, there might be some kind of unpredictable interference, or natural degrading of tools (our own aging sense organs qualify), or an outcome that mimics another. No matter the root causes, honesty calls for the language of probability and range. "The reality is 60% likely to be in the narrow range 300 to 800, and 95% likely to be in the wide range 50 to 1200." Regardless of the amount of precision, or whether the "numbers" are gut feelings, the result is that the connected thoughts are colored mid-real. Fortunately, of course, mid-real thoughts are valuable anyway for planning. Flexible plans can be constructed to succeed based on the whole range. A mid-real all-or-nothing situation is less manageable, but knowledge of the probability is at least good information.
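Statements of that shape can be sketched numerically. As an illustration only (the bell-curve model and the center/spread values 550 and 297 are assumptions I've picked to roughly match the quoted 60% figure; nothing in the quote specifies them), here is how two intervals could be scored against a single uncertain estimate:

```python
import math

def interval_prob(mu, sigma, lo, hi):
    """Probability that a Normal(mu, sigma) estimate lands inside [lo, hi]."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

# Hypothetical estimate: centered at 550 with a spread of 297 (made-up numbers)
p_narrow = interval_prob(550, 297, 300, 800)   # roughly 0.60
p_wide = interval_prob(550, 297, 50, 1200)     # roughly 0.94

print(f"narrow: {p_narrow:.2f}, wide: {p_wide:.2f}")
```

Under these made-up numbers the wide interval scores nearer 94% than the quoted 95%, which is itself a small reminder that such figures are honest approximations rather than exact measurements.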

Lastly, thoughts can be mid-real all by themselves. Some thoughts are essentially mid-real. Due to the thought's construction, the supposedly supporting actions actually can't settle the question of its realness. The issue might stem from the thought being too vague, or self-contradictory, or accommodating of any conceivable circumstance, or demanding circumstances that probably can't ever be achieved. Mid-real thoughts of this kind may be comforting or fun to play with, but allowing them a status beyond mid-real would be inconsistent with the definition applied here. Admittedly, the thoughts' creators and followers may not mean for the thoughts to be mid-real. Maybe they haven't noticed the gradual dilution of the thought they champion; now, for whatever reason, they're continuing to treat the thought the same although it's been emptied of firm assertions. Then again, that might be the characteristic that attracts them. They cannot be shown to be conclusively wrong. The downside is that they cannot be shown to be conclusively right either.

I realize I'm obligated to confront one particular objection. What if the practical, genuine existence of mid-real thoughts is only a distraction from the main question of realness rather than a valuable clue about which answers are serious? Did I start out by skipping too casually over the terms "thought", "action", "outcome", and of course "realities"? If the subtle difficulties of mid-real thoughts were escaped, what would the ultimate source of realness be? Unfortunately, my response is to concede that I offer no escape. Those terms are indeed circular (interdependent). Thoughts are events in the brain. The judgment of connections between thoughts and actions happens in the brain but so does the judgment of which realities are in harmony with which outcomes. Thoughts are approximations of realities but realities are only represented in the brain by thoughts. ("I'm 70% sure I see the back of Bill's head across the room" is a thought.) The hypothetical actions to execute in order to verify thoughts originate as thoughts. The thoughts can't be infallibly real because the actions need to tie them to realities. Even the realities can't be infallibly real because realities are in the form of sometimes-incorrect thoughts. The actions can't be infallible because of the flaws that have been explained. No single component is enough to be an ultimate source of progress toward realness. The path requires a team effort. Though, on occasion, the teamwork might resemble a melee as realities force thoughts to change or shatter.  

Saturday, January 19, 2019

in the eye of the decoder

The saying goes that beauty is in the eye of the beholder. Well, what if a symbol's meaning is in the eye of the decoder?

Significant philosophical disagreements stem not only from holding differing ideas but also from interpreting the same ideas to differing degrees. That's why beliefs frequently come in strong/hard versions versus weak/soft versions. Surprisingly, the distinction between the strong and weak interpretations of "meaning is in the eye of the decoder" indicates a lot about someone's whole philosophy. If two people are divided on this, they're probably divided on more too.

The weak interpretation should be undeniable: meaning is affected by the decoder fairly often. They're part of the context, and any single symbol's perceived meaning can be greatly modified by whatever context it's in. For instance, the meaning of "boot" to UK listeners can be unlike its meaning to US listeners. Or words that are offensive to one society can be relatively less offensive in another. Or a film that enchants one viewer can be dumbfounding to another; meanwhile its writer and director might have had another intent altogether.

However, in my view—and in the view of countless other thinkers now and in the past—the relationship runs deeper. I side with the stronger interpretation: the meaning is literally in the decoder (...just not in the eye). It's an event of a decoder. This event is the meaning's real essence (locus). And it probably occurs a multitude of times in a multitude of decoders, unless the meaning truly is novel. The converse is that a symbol without a decoder cannot have meaning.

Linear B is an illuminating case. It's an ancient script that was discovered by archaeologists. It wasn't well-understood at first, and several insights led to its decipherment decades afterward. In order to be consistent with my view, I'm forced to assert that all the tablets of Linear B didn't have real meanings for an extended period. Bluntly put, once the original writers/readers all died there weren't meanings for centuries, until the breakthroughs that enabled scholars to translate the symbols. Of course, they had excellent reasons to expect that they could reconstruct coherent meanings eventually. The source markings had the characteristics of language elements; they repeated in lengthy sequences but not in an unchanging or a random pattern.

Despite how it appears, distinguishing "they found the meaning" and "they reconstructed a meaning" isn't pointless hair-splitting. If someone objects to the idea that the meaning itself exists purely in the decoder at the time of decoding, then they face a deluge of sensible follow-up questions. Where else is the meaning? How is it created and destroyed? Does it move or metamorphose or duplicate? Why can it be perceived differently?

Because these questions revolve around the metaphysics of the objector's alternative notion of meaning, their answers reveal whatever stuff they prefer to tack onto physical reality. Again, this is a striking contrast to simply accepting that meaning is an event of the decoder. Then everything involved can be matter alone and standard physical phenomena. The meaning consists of the state of the decoder's matter after the task of decoding has changed it.

In effect, symbols are like the steps to follow to shape the decoder into an internal arrangement that embodies the meaning. At a low level they function like pressures, sometimes quite subtle, on the motions/physics of particular segments. Having the ability to elaborately shift in response to symbols is how something qualifies to be a decoder. Even decoding is transcoding, in which the result's new code is the inner code for meaning used by the decoder's substance.

The crux, previously mentioned, is that the symbols are matter, the path that the symbols take to the decoder is a path taken by matter or energy, e.g. waves of sound, and the consequences of the received symbols in the decoder happen in matter. This overall picture has obvious appeal to people whose views leave out popular supernatural concepts such as souls and eternal realms. It's relatively less common to stubbornly combine an irreligious stance with a metaphysical understanding of meaning. One possible fusion, which echoes panpsychism, is that matter in general has a "mind property" in addition to its detectable properties.

On the other hand, regardless of the number of issues avoided when meaning doesn't have an extra-special kind of existence, the important issue of telling apart subjectivity and objectivity becomes a little problematic. How can meaning ever be objective at all if it's the subjects' matter? The challenging answer is that it's certainly not by default. Greater levels of objectivity are progressively earned through diligent work to connect meanings to objects rather than only subjects.

Thus the meanings that are most objective are precisely those that have been most thoroughly backed by such work. Ideally the full details of the work are then communicated and recorded so that everyone may judge how much objectivity has been earned. The meaning of "the nation of Suriname is north of the nation of Uruguay" is considered highly objective because recent maps are plentiful, trustworthy, and in unwavering consensus. For this reason the action of viewing a South America map verifies that the meaning is tied into objective reality, regardless of how many subjects the meaning is formed in at any moment. If a meaning can be reliably applied in relevant actions, then it's objective enough. It doesn't need a mystical external abode. The metaphorical eye of the decoder suffices.

Monday, November 19, 2018

lotto balls and snap judgments

The mindfulness meditation craze has led to a few odd misconceptions about its goals. I'm convinced that imprecise wording is partly to blame. Like other subtle mental states, mindfulness can be easier to describe by emphasizing what it's not. So teachers note that it isn't analyzing any one item in depth, and it isn't reacting emotionally—positively or negatively—to any one item. They may refer to the meditator's role as "bare awareness" of the flow of present experience. Apart from re-centering attention when it becomes entangled, the only thought process is receptive observation. And the only attitude is undisturbed neutrality. Unfortunately, enthusiastic beginners may confuse these instructions' overall intent. They may jump to the conclusion that these are absolute laws for a mindful life: thinking and judging are no-nos.

But the slightest investigation into Buddhist tradition reveals that mindfulness meditation coexists with strong reverence for reasoning and morality. Obviously, wisdom and right living are values of equal or greater importance. Given this context, the proper aim can't be to extinguish every form of mental activity that supposedly falls into the vague categories of "thinking" or "judging".

As I understand it, the target of mindfulness is more specific: lotto ball thinking. A lottery's mixing machines rapidly churn masses of numbered balls until several emerge from the chaos in single file. Brains are like mixing machines, but the objects are nerve impulses that travel and combine and fizzle out along myriad pathways. A deluge of signals comes in from the senses and the body itself. It's no surprise that surprising thoughts pop out of the cerebral cortex's vortex like lotto balls, sometimes for seemingly little reason.

Meditation helps people to recognize the existence, the extent, and the nature of lotto ball thoughts. However, I for one wouldn't claim that lotto ball thoughts are inherently useless or harmful. Some are flashes of creative brilliance. Even the most distasteful could be beneficial for discovering uncomfortable truths about self-destructive thought patterns.

The point is the ongoing insight that lotto ball thoughts just happen naturally. Meditation is practice for seeing them for what they are and then responding to them deliberately and productively. Without it, lotto ball thoughts have a greater chance of pushing and pulling the thinker in various contradictory directions, prompting them to develop a sour mood, encouraging their selfishness, distracting them with trivia, etc. The tyranny of lotto ball thoughts is the enemy; thinking isn't.

Similarly, I'd argue that mindfulness has the specific target of snap judgments. Snap judgments are either instinctive or embedded by long-term associations. They follow immediately on the heels of the judged item. They're so basic that writers frequently call them attractions or aversions. They're perceived as powerful because they're raw and deeply rooted. Needless to say, they're not obligated to make much sense.

Once more, I for one wouldn't claim that the lesson to be learned is that snap judgments are inherently incorrect. (Did I mention that I'm neither a mainstream nor a secular Buddhist?) Instead, meditation is for practicing the often difficult task of refusing to act on these snap judgments or to dwell/ruminate on them. Then they will gradually fade, like the electricity-saving screen of a device that's merely stared at rather than interacted with.

The effort to defang snap judgments doesn't mean pretending that pleasant experiences aren't pleasant or that awful experiences aren't awful. It doesn't mean indifference about the world and society. Arguably, it enables sophisticated and coherent judgments to take the place of error-prone gut feelings. At the same time it assists with effectively carrying out whatever principles have been chosen. High-minded ideals can hardly be followed while someone is really ruled by the whims of the moment.

I hope it's clear that mindfulness isn't an alternative to rationality or a moral center. It's an admirable tool for pursuing both!

Friday, May 18, 2018

the column of sieves

In the midst of controversial debates, it's too common for either side to demand a "smoking gun" proof from the other. They insist on an item of evidence that's easily understood, beyond all doubt, and strikingly dramatic. They need to be impressed before they'll budge an inch. (Of course, whether or not their demand is sincere is a separate question.)

The strategy works in the heat of a debate because grand indisputable tests are relatively rare. Tests that are actually workable tend to have built-in limitations. Ethical reports of such tests attach sober probabilities to the corresponding conclusions. Imperfection is normal. Tests have holes. The genuine reality could "slip through". A favorable result might still be wrong.

Nevertheless, the big picture is more hopeful. Uninformed people often fail to grasp the total value of an array of imperfect tests. Although one test has known weaknesses in isolation, combining it with other tests can make a vast difference. If one test's holes are like the holes in a sieve (...a sieve with fewer holes than a real sieve would have...), then an array of favorable tests is like a line of sieves arranged in a column. Through this arrangement, anything that slipped through one sieve's holes would probably not slip through another's as well.

The rules of probability correspond to this analogy. As long as the tests are statistically independent, i.e. running one test doesn't affect another, the likelihood is very low that all the favorable results of the array of tests are simultaneously false. That coincidence would be unlikely. One example is five successful tests, where each success has a one-in-six chance of being false. For all five successes to be false at once would be comparable to rolling sixes in five consecutive rolls of a die: a chance of (1/6)^5, about 1 in 7,776.
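The die-roll arithmetic in the paragraph above can be checked in a few lines. This is only a sketch of the textbook independence calculation; the variable names are my own and nothing here comes from a statistics library.

```python
# Probability that five independent favorable test results are all
# false, when each result has a one-in-six chance of being false
# (like one particular face of a fair die).
p_single_false = 1 / 6
p_all_false = p_single_false ** 5      # independence: multiply the chances
odds_against = round(1 / p_all_false)  # expressed as "1 in N"
print(f"{p_all_false:.6f} (1 in {odds_against})")
```

Multiplying the individual chances is exactly the step that fails if the tests share "identical holes": dependent errors don't multiply down this way, which is why that objection, if true, would matter.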

Unfortunately, the persuasiveness of an array of tests is a much more difficult story to tell. The focus is abstract and awkward. In place of a lone heroic test to admire, there's a team of mediocrities. It's broad and messy, like reality. Sleek certainty is easier to find in fantasies.

And the situation is generally less clear than this simplified portrayal. The mathematics is more complex and the conclusions more arguable. Perhaps the test results aren't all successes. Or opponents may suggest that the probability that a test is wrong should be greater; they may assert that it's far more flawed than it was said to be. Or, if they're more subtle, they may claim that the tests in the array all have identical holes ("pervasive blind spots"), so multiple tests aren't an improvement over one.

Attacks on the details would be progress compared to the alternative, though. Better that than a childish sweeping rejection of tests that can't be faultless. Perhaps the usual tests are suspect, but the recommendation is to round up all these usual suspect tests anyway. Four honest approximations reached through unconcealed methods deserve more credit than an "ultimate answer" reached through methods which are unknown or secret or dictatorial or unrepeatable.

Tuesday, February 27, 2018

verifiable definitions for everybody

It's a frequent mistake to try to elevate one culture's notions into universal laws. That's why I don't mind considering the charge that the philosophical stance I encourage is a "purely Western invention".  If it had no chance at broader relevance or appeal, I'd rather know than not.

Nevertheless, at least the core principle isn't a Western-only concept. It's binding an idea's meaning and accuracy to the "shadow" it should cast on outcomes, i.e. realities which are exposed via human actions. (These actions might be mental and/or passive, such as calculating or unbiased observing.) The important instructions to follow this principle properly aren't especially Western either: to be diligent, honest, and fair throughout the tasks of gathering up and evaluating the outcomes that supposedly reveal the corresponding idea's meaning/accuracy; to be alert to any outcomes that are equally supportive, or more supportive, of competing ideas; and most difficult of all, to recognize the absence of the idea's shadow on outcomes that really should've been affected if the idea were meaningful and accurate.

I think this stance is widely applicable because the human problems it was created to address occur widely. When one person is trying to narrowly determine what another person is talking about, comparing the idea to specific actions and outcomes helps. "I'm talking about the position you would reach by traveling to latitude and longitude coordinates X and Y in decimal degrees...or the position you would see on a map by finding the intersection of them." A second problem might be trying to decide whether another person is using differing ideas to express a meaning that's close to identical. A third problem might be trying to estimate an idea's overall feasibility by estimating the feasibility of the actions associated with it. A fourth problem might be trying to deflate another person's deception (or ignorant self-delusion) by noticing that their idea lacks adequate rationales that anyone can check. Wherever and whenever there are groupings of people whose symbolic communication empowers them to refer in detail to things that aren't here right now, they're also empowered to carelessly refer in detail to things that aren't so.

Actually, speaking from where I sit, it's debatable how widely this stance is firmly embraced within any of the cultures that are said to have a Western heritage. It's in a continual contest with other cultural currents, some of which have the backing of social pressure and a formidable history. The effect is that people within these cultures have also long used alternative "methods" such as superstition or the opaque rulings of unchallenged authority figures. True, the exact content of the shoddy ideas changes over time, because fashions—and blind spots—change. But the mechanism is depressingly consistent. For instance, anxiety about counteracting bad spirits gives way to anxiety about counteracting bad energy (or counteracting minute quantities of bad toxins?). I'm forced to confess that even many of the members of my own culture aren't in favor of the stance I preach.

A more general truism is at work here: sorting both ideas and cultures by a single Western/non-Western split is too coarse-grained to reliably predict people's responses. There are significant differences among all the cultures which are said to have a Western heritage. The result of these differences is that my stance is more welcomed in some of them than in others. And it's more welcomed in some subcultures than in others.

Just as receptiveness varies throughout the group of Western cultures, it varies throughout non-Western cultures as well. In the same way that everybody placed on one side cannot be assumed to be completely open to it, everybody placed on the other side cannot be assumed to be completely closed to it. Despite eager attempts both to neatly assign an idea to one side and then to emphasize a rift between the sides, large numbers of people on either side have always been willing and able to absorb the bits they like from the ideas that reach them. Voluntary exchange of useful ideas has been going on for as long as the voluntary exchange of goods has. The cultural divide might be a good representation of the accidental misunderstandings that can happen so easily...but it's hardly an impassable barrier in practice.

My expectations of finding common ground don't stop there. I suspect that "they" might find that portions of the idea feel familiar. Worthy ideas tend to spring up independently in several times and places, although the triggering situations and the forms of expression are of course unique. In this sense, I highly doubt that the origins of stances like mine have always been in the cultures in the Western pigeonhole. As I stated earlier, even the most abstract debates about meaning and accuracy have natural motivations. People in non-Western cultures must have developed their own versions of the triangle of ideas/actions/outcomes...though perhaps less formally or less fully. If so then their reaction will be "Oh, you're describing a view that has a few strong resemblances to ___ in my culture" instead of "You're describing a view which is entirely alien to anything that I've known before".

The risk of focusing on culture is missing the other levels at which opposing ways of thinking clash. Clashes at the level of culture are obviously pertinent, but so are clashes at the level of the individual's everyday struggles with their own thoughts. The truth is that my upbringing in a Western-categorized culture wasn't sufficient to stop me from restricting the introspective reach of this "very Western stance" for years. Otherwise it would've clashed with the unsound beliefs that I shielded from its standards. The turning point came when I aimed it at my base assumptions; I stopped treating it as an optional tool suitable solely for limited areas of knowledge. As tough as it might be to introduce a way of thinking to someone, it's not as tough as the next challenge of convincing them to value a rigorously examined, intellectually coherent life. The end goal isn't to convert every part of their culture to be more "Western" (how boring); it's to equip them to better analyze the parts of their own many-sided cultures for themselves and to free their minds if they wish. Some Westerners like me have had the same task.

Saturday, January 06, 2018

Force-curious but not Jedi?

By springing surprise after surprise on its fanatical audience, Star Wars: The Last Jedi has triggered an avalanche of differing reactions. I for one was neither passionately against nor in favor of it. My personal interest in anything related to Star Wars has been lukewarm for years. I liked it enough while I was at the theater, but I didn't have the urge to go again. I'm less disturbed by its own plot twists than by the seemingly slim possibilities that it left for the next movie. What will the heroes do now to save themselves and defeat their foes? What mysteries remain unsolved? Perhaps the third Star Wars movie of recent years will be subtitled "The Search for Lando".

The movie managed to stun me with a few jaw-dropping scenes, though. All of these played with one revolutionary concept: the Jedi voluntarily becoming extinct. I should hastily add that the dialogue pointedly clarifies that the hypothetical extinction would apply to the Jedi "religion" alone. The Force itself could never become extinct. As a result, individuals who have Force awareness and/or abilities would still be around, too. Without Jedi scholarship or apprenticeship to guide them, their belief status would be Force-curious but not Jedi. They'd be non-Jedi or "Nons" for short.

A Star Wars universe of Nons offers plenty of fuel for speculation. Rather than benefit from the findings (and mistakes!) of their predecessors, each generation of them would fumble anew at understanding elementary topics and performing novice feats. They might opt to pick and choose from a hundred contradictory understandings of the Force. With their meager knowledge, they'd try to judge for themselves whether particular actions were "light-side" or "dark-side". Their ignorance of the dangers would inevitably result in a few—maybe more than a few—using the Force for selfish aims and eventually having a destructive effect on everything around them. In addition, some of them would be motivated by their loyalty to the specific groups they identify with to use the Force purely to advance the glory of their group instead of the common (galactic) good.

Their contact with the Force would be outside of time-tested paths. They wouldn't be weighed down by the irritating restrictions of petty Jedi taboos. They'd be free of the drudgery of antiquated teachings and heavy-handed institutions or councils or elders. On religion surveys they'd check the box for "no affiliation". If asked for more details, they'd say that they're spiritual but not religious. If pestered about what they're spiritual about, they'd say that they can sorta sense an invisible Force of benevolence when they reach out with their feelings. And they can lift rocks or janitorial tools through telekinesis.

I suppose that this is well-aimed at the current cultural context. My guess is that many would be open to seriously considering that informal, laid-back Nons going back to the basics of transcendence would be an improvement over haughty, intrusive, inflexible, child-indoctrinating Jedi who oversee big budgets and vast initiatives in a mega-Temple. Organizations, religious or otherwise, are loathed for how they can be abused to empower terrible leaders and control the members.

That's not all. In some, this attitude also operates at a deeper philosophical level. They loathe committing fully to definite statements of their own beliefs. In the spiritual realm, and almost everywhere else, it's thought to be more acceptable for everyone to be on their separate individualized journeys. Clear and verifiable thinking is devalued. Or, worse, it's purposely shunned to ensure that no opinion can ever be viewed as more accurate than another. That's why it's optional to confidently ground one's abstract beliefs in concrete actions, facts, duties, etc.

I doubt I'm spoiling the movie by revealing that, in the end, it didn't overturn one of Star Wars' core parts. It only proposed and discussed the shocking concept of a complete handover from Jedi to Nons. Nevertheless, it shows the pluses and minuses that stand out to me when I listen to the mushy words of the spiritual but not religious. If they're always able to think whatever they wish about "spirituality", then the odds are excellent that they're merely imagining a fluid external "cause" that they fill with their wholly subjective experiences. By its nature this phantom cause couldn't have an innate form or shape to take into account. But if they object to this impolite characterization and insist that their spooky spirit is as real as the Force is within the Star Wars fictional universe, then why does It exist by vaguer rules than every other real thing? Unlike with those other real things, why can't they or anyone speak as conclusively about It in ways that are compatible with neutral confirmation...or disproof?

Saturday, December 09, 2017

hunting for license

Followers of materialistic naturalism like myself have a reputation for scoffing at wishful thinking. We're pictured as having an unhealthy obsession with bare inhuman facts. Despite that, I'm well aware that one of my own ideals is closer to a wish than to a reality a lot of the time. I refer to the recommended path to accurate thoughts. It starts with collecting all available leads. After the hard work of collection comes the tricky task of dispassionately sorting and filtering the leads by trustworthiness. Once sorting and filtering are done, the more trustworthy leads form the criteria for judging among candidate ideas. The sequence is akin to estimating a landmark's position after taking compass bearings from three separate locations, not after a single impromptu guess by eye.
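The compass-bearing analogy can be made concrete. Below is a minimal sketch of combining several bearings into one least-squares position fix; the observer coordinates, the bearings, and the function name are all invented for the illustration, not taken from any surveying library or from the text.

```python
import math

def intersect_bearings(observations):
    """Least-squares position from several (observer_xy, bearing_deg) pairs.
    Bearings are compass degrees: 0 = north (+y), 90 = east (+x)."""
    # Each sight line passes through p with unit direction d. Solving
    # sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i minimizes the
    # summed squared perpendicular distance from x to every line.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), deg in observations:
        th = math.radians(deg)
        dx, dy = math.sin(th), math.cos(th)       # unit direction of the line
        m11, m12, m22 = 1 - dx * dx, -dx * dy, 1 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12                   # 2x2 solve (Cramer's rule)
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three observers sighting a landmark that is actually at (3, 4):
obs = [((0.0, 0.0), math.degrees(math.atan2(3, 4))),
       ((6.0, 0.0), math.degrees(math.atan2(3 - 6, 4))),
       ((0.0, 8.0), math.degrees(math.atan2(3, 4 - 8)))]
x, y = intersect_bearings(obs)
```

With exact bearings the three sight lines meet in one point; with noisy bearings the same normal equations return the position closest to all the lines at once, which is the whole advantage of multiple fixes over a single guess by eye.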

If I'm reluctantly conceding that this advice isn't always put into practice, then why not? What are people doing in its place? We're all creatures who by nature avoid pain and loss, including the pain of alarming ideas and the loss of ideas that we hold. That's why many people substitute the less risky objective of seeking out the leads which would permit them to retain the ideas they cherish for reasons besides accuracy. They're after information and arguments to give them license to stay put. As commentators have remarked again and again, the longest-lasting fans of the topics and debates of religious apologetics (or the dissenting counter-apologetics) are intent on cheering their side—not on switching anytime soon.

Their word choices stand out. They ask whether they can believe X without losing respect, or whether they must believe Y. Moreover, they significantly don't ask which idea connects up to the leads with less effort than the others...or which idea introduces fewer shaky speculations than the others. The crux is that their darling idea isn't absurd and it also isn't manifestly contrary to one or more indisputable discoveries. They may still care a bit about its accuracy relative to competing ideas, but by comparison this quality is an afterthought. They're gratified as long as the idea's odds exceed a passable threshold. They can honestly envision that the idea could be valid in some sense. The metaphor isn't searching for a loophole but snatching up every usable shim to fix the loose fit of their idea within the niches that need filling. The undisguised haphazardness of it at least ensures that it's adaptable and versatile.

Of course, at root it's a product of compromise for people who are trying to navigate all the forces which push and pull at them. It soothes them about the lower priority they're consciously or unconsciously assigning to accuracy. By scraping together an adequate collection of leads to make an idea viable, they're informing everyone, including themselves, that their selection of the idea isn't unreasonable. If the pursuit of authenticity were a game, they'd be insisting that their idea isn't out of bounds.

Irritatingly, one strange outcome of their half-cured ignorance might be an overreaction of blind confidence. The brashest of them might be moved to declare that their idea is more than just allowable; it's a first-rate "alternative choice" that's as good or better than any other. By transforming it to a matter of equal preference, they can be shameless about indulging their preference.

In isolated cases it really could be. But the relevant point is that, successful or not, the strategy they used was backwards. They didn't go looking honestly for leads to the top idea. All they wanted was greater license to keep the idea they were already carrying with them while they looked. In effect, thinking in these terms motivates more than a mere bias toward noticing confirmations of their idea: they're sent on the hunt. Needless to say, they can't expect anybody else to be as enchanted by their hunt's predictable findings.

Friday, December 01, 2017

trading posts

From time to time I'm reminded that it's misguided to ask, "Why is it that faulty or unverified ideas not only survive but thrive?" In my opinion the opposite question is more appropriate: "What has prevented, or could prevent, faulty or unverified ideas from achieving even more dominance?" The latter recognizes the appalling history of errors which have seized entire populations in various eras. The errors sprout up in all the corners of culture. The sight of substandard ideas spreading like weeds isn't exceptional; it's ageless.

The explanations are ageless too. One of these is that the ideas could be sitting atop a heaping pile of spellbinding stories of dubious origin. After a story has drawn its audience toward an unreal idea, it doesn't vanish. Its audience reproduces it. It might mutate in the process. When it's packaged with similar stories, its effect multiplies. Circles of people go on to trade the stories eagerly, because when they do they're trading mutual reassurance that they're right. There's a balance at work. Possessing uncommon knowledge is thrilling, especially when it's said to be both highly valuable and "suppressed". But the impression that at least a few others subscribe to the same arcane knowledge shields each of them from the doubt that they're just fantasizing or committing an individual mistake.

As is typical, the internet intensifies this pattern of human behavior rather than creates it out of nothing. It's a newer medium, albeit with a tremendous gain in convenience and visibility compared to older forms. In the past, the feat of trading relatively obscure stories depended on printed material such as newsletters or pamphlets or rare books. Or it happened gradually through a crooked pipeline of conversations that probably distorted the "facts" more and more during the trip. Or it was on the agenda of meetings quietly organized by an interested group that had to already exist in the local area.

Or a story could be posted up somewhere for its intended audience. This method especially benefited from the internet's speed and wide availability. Obviously, a powerful computer (or huge data center full of computers) with a memorable internet address can provide a spectacular setting for modern electronic forms of these posted stories—which have ended up being called "posts". Numerous worldwide devices can connect to the published address to rapidly store and retrieve the posts. Whether the specific technology is a bulletin board system, a newsgroup, an online discussion forum, or a social media website, the result is comparable. Whole virtual communities coalesce around exchanging posts.

Undoubtedly, these innovations have had favorable uses. But they have also supplied potent petri dishes for the buildup of the deceptive, apocryphal stories that boost awful ideas. So when people perform "internet research", they may trip over these. The endlessly depressing trend is for the story that's more irresistible, and/or more smoothly comprehended, to be duplicated and scattered farther. Unfortunately that story won't necessarily be the most factual one. After all, a story that isn't held back by accuracy and complex nuance is freer to take any enticing shape.

It might be finely-tuned in its own way, though. A false story that demands almost no mental effort from its audience might provoke disbelief at its easiness. It would bear too close a resemblance to a superficial guess. That's avoided by a false story that demands a nugget of sustained but not strenuous mental effort—or a touch of inspired cleverness. It more effectively gives its audience a chance of feeling smug that now they're part of the group who're smart and informed enough to know better.

I wish I knew a quick remedy for this disadvantage. I guess the superior ideas need to have superior stories, or a mass refusal to tolerate stories with sloppy substantiation needs to develop. Until then the unwary public will be as vulnerable as they've ever been to the zealous self-important traders of hollow stories and to the fictional "ideas" the stories promote.