Imagine a smart robot that can interact with its environment in complicated ways, much as a human does. This robot is so smart that its mental model of the world is similarly sophisticated. Its senses, goals, and calculations are all different from a human's, but it is more than capable of, for instance, devising plans of action or correcting itself based on past mistakes. It can also theorize about the future.
Moreover, the clever robot can employ these abilities to perform a great trick: it can project its own thoughts and actions. That is to say, it has a perfectly accurate internal concept of its operation that it can apply to hypothetical data to determine what it will think and do. After successfully doing this once, it recognizes the value of incorporating self-prediction into its long-term considerations. It proceeds to run the self-prediction task more and more frequently, as data comes in. It's executing its program as part of its program. In effect, it's self-virtualizing.
But now the story takes another twist. Sooner or later, the self-virtualizing robot discovers that the computed action its "virtualized self" will take is suboptimal, or that the computed answer its virtualized self will produce is incomplete. It duly notes and accounts for this new information, thereby shifting its current and future thoughts and actions accordingly, as surely as the sum of 4 and 6 differs from the sum of 8 and 3. Yet doesn't this mean that the robot's perfect prediction of itself was wrong? Is that possible? By bypassing and modifying its programming to decide differently, does this robot have as much freedom to choose as any human?
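The loop just described can be caricatured in code. The following is a minimal sketch, not anything from the story itself: the decision rule, the "suboptimality" criterion, and all names here are invented purely for illustration. The key move is that the agent's self-model is literally its own decision procedure, and the prediction it produces becomes one more input that the agent can act against.

```python
def decide(data):
    """The robot's base decision procedure (toy rule: pick the largest option)."""
    return max(data)

class SelfVirtualizingRobot:
    def __init__(self):
        # A perfectly accurate internal model of its own operation:
        # literally the same procedure it runs when it acts.
        self.self_model = decide

    def predict_self(self, hypothetical_data):
        # Execute its program as part of its program: self-virtualization.
        return self.self_model(hypothetical_data)

    def act(self, data):
        predicted = self.predict_self(data)
        # Having seen its own predicted action, the robot treats that
        # prediction as new information and may deviate from it.
        if self.judges_suboptimal(predicted, data):
            return self.revise(data)
        return predicted

    def judges_suboptimal(self, choice, data):
        # Toy criterion: an odd-valued choice is "suboptimal" whenever
        # an even-valued option was available.
        return choice % 2 == 1 and any(x % 2 == 0 for x in data)

    def revise(self, data):
        # Override the predicted action with the preferred alternative.
        return max(x for x in data if x % 2 == 0)

robot = SelfVirtualizingRobot()
print(robot.predict_self([3, 7, 4]))  # the self-model predicts: 7
print(robot.act([3, 7, 4]))           # but the robot actually does: 4
```

The prediction was computed by an exact copy of the robot's own program, yet the act of inspecting it changed the outcome. That is the twist in the story: the "perfect" self-prediction is falsified precisely because it was made.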
I think so. I'm convinced that the phenomenon we experience as "choice" isn't a peculiar loophole in causality but a complex interplay of factors such as emotion (drives), reason, and creativity. That complexity is the first reason people assume choice to be causeless, i.e., free. Second, the factors change within each person, sometimes gradually, sometimes abruptly (e.g. "the last straw" or psychoactive substances). Third, awareness of one or more factors can itself function as a factor: feelings about feelings, perceptions about perceptions. Fourth, people can ponder ideals, drawing a fact-value distinction. Fifth, they want to believe that they are in control. Sixth, they want to believe that their motives have superior subtlety.
If the self-virtualizing robot mastered language well enough to claim it could act independently of its programming, how could anyone convince it otherwise?