Friday, June 04, 2010

a dialogue about the Chinese Room and meaning through isomorphism

I could easily envision objections to the ideas laid out in the longish post about meaning through isomorphism. To address them, I'll rip a page from Hofstadter's playbook and present a dialogue. (Yes, I'm fully aware that philosophical dialogues are not in any sense original to Hofstadter.)

Soulum: The concept of meaning through isomorphism is inadequate. For instance, you miss the whole point of the Chinese Room argument when you straightforwardly conclude that the "Chinese Turing Test algorithm" understands Chinese rather than the person! You're supposed to realize that since the algorithm can be executed by someone who understands no Chinese, it's nonsense to equate executing the algorithm with understanding Chinese. It's a reductio ad absurdum of the whole idea of a valid Turing Test, because understanding is more than a symbol-shuffling algorithm.
Isolder: You're right, perhaps I miss the whole point. Instead, let's consider a different situation in order to illuminate precisely in what ways understanding is more than a symbol-shuffling algorithm. Say we have two intelligences trying to learn Chinese for the first time, one human and one non-human. As you just said, the human is capable of understanding but the non-human is only capable of "symbol manipulation". The Chinese teacher is talented enough to communicate with the human in the natural language he or she already knows and with the non-human via the symbols it already knows. To the human, the teacher says something like "X is the Chinese symbol for Y". To the non-human, the teacher types something like "X := Y". After each lesson, how should the teacher test the students' understanding of Chinese?
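
(Blogger's aside: the non-human's half of this lesson could be as bare as the rough sketch below, in which the teacher's "X := Y" lines simply fill a lookup table. The romanized vocabulary and the table itself are invented here purely for illustration, not something either speaker specified.)

    # Rough sketch only: the teacher's "X := Y" statements accumulate in a table.
    # The entries below are placeholders for illustration.
    lesson = {
        "shui": "water",    # teacher typed: shui := water
        "ren": "person",    # teacher typed: ren := person
        "huang": "yellow",  # teacher typed: huang := yellow
    }
    reverse = {known: chinese for chinese, known in lesson.items()}

    def to_known(chinese_symbol):
        """Translate a Chinese symbol into the already-known symbol system."""
        return lesson.get(chinese_symbol)

    def to_chinese(known_symbol):
        """Translate an already-known symbol into Chinese."""
        return reverse.get(known_symbol)

    # Two of Soulum's test strategies, reduced to lookups:
    print(to_chinese("water"))   # prompt in the known language, answer in Chinese
    print(to_known("huang"))     # Chinese symbol translated into the known language
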
Soulum: Isn't it obvious to anyone who's taken a foreign language course? The test could draw on a wide variety of strategies: questions in the known language that prompt for answers in Chinese, statements in the known language to be translated into Chinese, Chinese symbols to be translated into the known language, or an open-ended question in the known language to be answered using only Chinese.
Isolder: OK. Assume both students pass with perfect scores. Now, given that the human learned Chinese through the bridge of a natural language while the non-human learned it through the bridge of "mere symbols", is it fair to assert that after the lesson the human understands Chinese but the non-human doesn't? If not, how does the human's learning differ from the non-human's?
Soulum: Simply put, the human has awareness of the world and the human condition. If the teacher asks "What is the symbol for the substance in which people swim?" then the human can reply with the symbol for "water" but the non-human can only shuffle around its symbols for "people" and "swim" and then output "insufficient data". The non-human can't run some parsing rules to start with "people" and "swim" and produce "water".
Isolder: But by asking about swimming, you've started assuming knowledge that the non-human doesn't have. You're inserting your bias into the test questions. It's like asking a typical American what sound a kookaburra makes. You may as well flip the bias around and ask both a human and a tax-return program what the cutoff is for the alternative minimum tax.
Soulum: Point taken. However, you're still missing the thrust of my reasoning. The human's understanding is always superior to that of the non-human because the human has experiences. The human can know the meanings of the symbols for "yellow" and "gold" and thereby synthesize the statement "gold is yellow" even though he or she might never have been told that gold is yellow.
Isolder: I'll gladly concede that experiences give added meaning to symbols. Yet the task you describe is no different from the others. I happen to have a handy program on my computer for working with images. One of its functions is the ability to select all regions of an image that have a specified color. If I load an image of gold into my program and instruct it to select all yellow regions, won't its selections include all the gold pieces? By doing so, hasn't the program figured out that gold is yellow?
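
(Blogger's aside: Isolder doesn't name the image program, but a "select all yellow regions" function could plausibly amount to something like this sketch, assuming RGB pixels and a crude rule that "yellow" means strong red and green with weak blue. The thresholds and the toy two-pixel image are my own invented values.)

    # Hedged sketch, not the actual program Isolder has in mind.
    import numpy as np

    def select_yellow(image, lo=150, hi=120):
        """Return a boolean mask marking pixels the program treats as yellow:
        red and green at or above `lo`, blue at or below `hi` (assumed rule)."""
        r, g, b = image[..., 0], image[..., 1], image[..., 2]
        return (r >= lo) & (g >= lo) & (b <= hi)

    # A toy 1x2 "image": one gold-colored pixel and one blue pixel.
    gold_and_blue = np.array([[[212, 175, 55], [30, 60, 200]]], dtype=np.uint8)
    print(select_yellow(gold_and_blue))  # [[ True False]] -- the gold pixel counts as yellow
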
Soulum: Regardless, it won't pass a Turing Test to that effect. Try typing the sentence "What substance in the image have you selected?" and see if the program can do what a preschooler can.
Isolder: You're confusing the issue. The program is designed to be neither an image recognizer nor a natural-language processor. All the experiences you describe as giving greater meaning to human words are in principle just more information. As information, the experiences could in principle be fed into a non-human like so many punch-cards. Human brains happen to have input devices in the form of senses, but the same information carried by those sensations could be conveyed via any number of isomorphisms. And there's no obstacle to the isomorphism being digital rather than analog and binary rather than decimal. Granted, one second of total human experience is truly a fire-hose deluge of information, but like a human the program could use heuristics to discard much of it.
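
(Blogger's aside: to make "any number of isomorphisms" concrete, here's a toy sketch in which one made-up sensor reading is carried equally well by decimal digits, binary digits, or punch-card holes. The reading and the encodings are purely illustrative.)

    # Toy illustration: the same information riding on three isomorphic encodings.
    reading = 37  # one invented sample from some sense, in arbitrary units

    decimal_form = str(reading)           # "37"
    binary_form = format(reading, "b")    # "100101"
    punch_card = ["hole" if bit == "1" else "blank" for bit in binary_form]

    # Each representation maps back to the same value, so none of them
    # carries more or less information than the others.
    assert int(decimal_form) == reading
    assert int(binary_form, 2) == reading
    assert int("".join("1" if c == "hole" else "0" for c in punch_card), 2) == reading
    print(decimal_form, binary_form, punch_card)
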
Soulum: I'm not confusing anything. Fundamentally speaking, humans don't discard their many experiences, what you called the fire-hose of information; they creatively distill them into relevant knowledge. In contrast, your non-human symbol-shufflers, your programs, cannot accomplish any task other than the original one laid out by the programmer. And some smart guys have proven that the programs are even unable to check their own work!
Isolder: You're on pretty shaky ground when you claim that humans are mentally flexible and therefore aren't "designed". On what basis do humans distill the information into knowledge? According to common sense, the distilled information is the information that matters, but why does some information matter and some not? In relatively primitive terms, information matters to humans if it aids in the avoidance of pain and the pursuit of sustenance, shelter, and companionship. In short, survival. Evolution is the inherently pragmatic "programmer of humans". Humans process information differently than non-humans do because of different "goals" and different "programmers". Thus abstract symbol manipulation is natural to non-humans and unnatural to humans. The effort to invent non-humans that can mimic activities like vision and autonomous movement continues to achieve greater success over time, but frankly evolution has had far more time and far more chances. It's difficult to catch up.
Soulum: Statements like those frustrate me during conversations with people like you. It's too simplistic to analyze human behavior as if it were animal behavior. Since prehistory, humans have developed enormously complex cultures of societies, beliefs, languages, artworks, and technologies. Sure, infants aren't fully educated and integrated until after many years, but by the time they've grown they're able to contribute through countless means. At its peak, human intellect far surpasses that of other animals.
Isolder: Indeed, humans have done incredibly diverse things. The capture of meaning through isomorphism is a prime ingredient. Humans can experience the emotional ups and downs within a fictional story. They have a "suspension of disbelief" by which the story triggers isomorphic reactions. An unreal protagonist yields mental meanings that are real. The isomorphisms of complex culture extend "humanity" to surprising symbolic ends. At the same time, humans perform their isomorphic feats of understanding by stacking one isomorphism on another - turtles all the way down. When someone comments that "no one understands quantum mechanics", he or she probably means that quantum mechanics has an "isomorphism gap" of insufficient metaphors. There's a turtle missing, and as a result humans experience a distressing cognitive void.
Soulum: You speak as if knowledge were constructed from the bottom-up. Isn't it self-evident that some truths are correct despite a lack of proof? I haven't inferred my morality. I haven't needed to experiment to confirm that I exist. The compulsion of an airtight logical deduction surely isn't a physical force. Your isomorphisms notwithstanding, I know I have a soul of reason and through it I can make contact with a solid realm that's independent of whatever happens.
Isolder: Hmm. I believe there are isomorphisms behind those meanings, too, but I'll leave that discussion for later.
