A lot of my recent entries have been about the varied maneuvers carried out by bright, well-informed, confident people to shield beliefs that have serious problems. These beliefs need to be shielded from reasonable doubt because they aren't supported well enough by normal means. Or, worse, they're in shrill discord with beliefs that do have better support.
One way to visualize these beliefs' shortcomings is Battleship. I'm referring to the simple, cheap, perennially popular type of game in which each player has a collection of narrow ship pieces and a private grid of spaces with little holes. First each player places their nonmoving ships on their own grid. Then the players take turns calling out labelled locations on the opponent's grid. If any part of any of the opponent's ships occupies that location, then that part of the ship has been "hit", which is signified by placing a red peg in that hole. If not, then the call is a "miss", which the caller signifies with a white peg. Using their pegs, the players build up an increasingly informative history of their past calls. Once every part of every one of a player's ships has been hit, the opponent wins.
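For anyone who prefers to see the mechanic as code, here's a minimal sketch of the call-and-peg loop in Python. The names (Board, Peg, call) are invented for illustration, and the sketch skips ship placement rules and turn order entirely.

```python
# A minimal sketch of the call-and-peg mechanic; names are illustrative only.
from enum import Enum

class Peg(Enum):
    RED = "hit"     # part of a ship occupies the called position
    WHITE = "miss"  # the called position is empty

class Board:
    def __init__(self, ship_positions):
        # ship_positions: the grid labels covered by ships, e.g. {"A-1", "A-2", "A-3"}
        self.ships = set(ship_positions)
        self.pegs = {}  # history of past calls: position -> Peg

    def call(self, position):
        """Resolve a call against the hidden grid and record the peg."""
        peg = Peg.RED if position in self.ships else Peg.WHITE
        self.pegs[position] = peg
        return peg

    def all_sunk(self):
        """The caller wins once every ship position holds a red peg."""
        return all(self.pegs.get(pos) is Peg.RED for pos in self.ships)
```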
Each call is like a prediction about grid positions occupied by parts of the ships. That prediction is an identifiable and clear-cut repercussion of the caller's "ship beliefs" at that time. If they believe that the battleship sits in the first row starting at the left corner, then they will call location A-1. If it's declared a red-peg hit, then the belief has earned greater credence. If it's declared a white-peg miss, then the belief has lost credence. Red pegs and white pegs sort out prospective beliefs about the ship positions.
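To make "earned greater credence" and "lost credence" concrete, a toy update rule could look like the sketch below. The nudge-toward-a-target rule and the 0.2 weight are rough stand-ins of my own choosing, not a formal model of belief revision.

```python
# A toy credence tracker: hits nudge a belief toward 1.0, misses toward 0.0.
# The update rule and the weight are illustrative choices, not a formal model.
def update_credence(credence: float, hit: bool, weight: float = 0.2) -> float:
    target = 1.0 if hit else 0.0
    return credence + weight * (target - credence)

# Belief: the battleship sits in the first row starting at the left corner.
credence = 0.5
credence = update_credence(credence, hit=False)  # A-1 comes back a white peg
credence = update_credence(credence, hit=False)  # so does A-2
print(round(credence, 3))  # 0.32: the belief has lost credence
```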
As described above, the principal goal tying these moves together is to eventually reveal (and sink) the opponent's ships. But what if one player's principal goal were different: to preserve for themselves, game after game after game, the belief that the battleship sits in the first row starting at the left corner? They wouldn't be enthusiastic about tracking that belief's misses, such as the games in which calling out A-1 resulted in a white peg. Staring at white pegs after misses would be counterproductive to preserving the belief. A position that still has no peg in it cooperates with the assumption that part of a ship might be there; a position with a white peg in it doesn't. Given this strange player's different goal, tossing out the white pegs before the game starts is the more advantageous move.
Back in less frivolous domains, the identifiable and clear-cut repercussions of beliefs tend to be more nuanced. Detection is an elusive struggle. People can't call out an unambiguous location and obtain an all-or-nothing answer. Prevalent margins of uncertainty ensure that a third answer unavoidably exists, one outside the Battleship rules: indeterminate. This is the default answer for the followers who've tossed out their white pegs. Whenever they can, they'll promptly classify the pleasing results of their belief's "calls" as red-peg hits, not as indeterminate. And they'll retell those stories to others and to themselves for perhaps decades. Also whenever they can, they'll promptly classify the disappointing results as indeterminate, not as white-peg misses. And they'll do nothing to rehash those stories; it helps that indeterminate results are less memorable.
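Here's a sketch that puts this curated sorting next to an even-handed version. The labels and the little outcome list are invented for illustration; the point is only that the curated scheme can never record a miss.

```python
from collections import Counter

# Each result of a belief's "call" is summarized by two judgments:
# does it support the belief, and is it too ambiguous to score cleanly?
def honest_label(supports_belief: bool, ambiguous: bool) -> str:
    if ambiguous:
        return "indeterminate"
    return "hit" if supports_belief else "miss"

def curated_label(supports_belief: bool, ambiguous: bool) -> str:
    # Pleasing results are filed as hits even when ambiguous; disappointing
    # results are downgraded to "indeterminate" instead of counted as misses.
    return "hit" if supports_belief else "indeterminate"

outcomes = [(True, False), (True, True), (False, False), (False, True), (False, False)]
print(Counter(honest_label(s, a) for s, a in outcomes))   # 1 hit, 2 misses, 2 indeterminate
print(Counter(curated_label(s, a) for s, a in outcomes))  # 2 hits, 3 indeterminate, zero misses
```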
Overall, the effect is curated escapism. The follower can say that they're aware of whether their beliefs are being confirmed, because after all they're looking at the calls and accumulating red pegs. At the same time they can say that their beliefs have perfect records of calls, because there are no white pegs to be seen: no results that they judged to be frank misses and remembered for later comparison.
Combating this strategy is one rationale for the black box analysis technique I described. The reminder is to judge a belief by what comes out of it and not by an attachment to the belief itself. If the belief's origin and individual characteristics were unknown, like an object encased in a black box, then the evaluation would happen through a consistent, unprejudiced audit of its products. In Battleship terms, this would be like making calls and placing pegs with as much impartiality as playing from an anonymous list of someone else's guesses about the ships. It's difficult, and the adjustment may be sluggish, but it's not impossible to learn to measure beliefs by their results.
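In code, that impartial audit is nothing fancier than scoring an anonymous list of calls against the grid, with no field anywhere for whose guesses they were. The sketch below is my own minimal illustration of the idea, not a reference implementation.

```python
# Score an anonymous list of calls against the hidden grid. Nothing in the
# tally depends on whose guesses they were or how cherished the underlying
# belief is; the audit sees only calls and outcomes.
def audit_calls(ship_positions: set, calls: list) -> dict:
    tally = {"hits": 0, "misses": 0}
    for position in calls:
        if position in ship_positions:
            tally["hits"] += 1
        else:
            tally["misses"] += 1
    return tally

# A dearly held belief and a stranger's guess list get scored identically.
print(audit_calls({"C-4", "C-5", "C-6"}, ["A-1", "C-4", "B-2", "C-5"]))
# {'hits': 2, 'misses': 2}
```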
The white pegs are not the opponent. Exposing the contents of the hidden grid is worth more than refusing to see the failures of faulty concepts.