Quote from: Deuterium on Apr 17, 2012, 03:42:33 PM
Quote from: LarsVader on Apr 17, 2012, 02:34:48 PM
His appearing/reacting emotionally doesn't necessarily mean that he actually feels emotions.
I always found this (the unemotional android) to be a completely ridiculous "trope" that has been sustained in the sci-fi genre. We are expected to believe that basically every characteristic of human consciousness is realized in these futuristic, advanced Artificial Intelligences:
Cognition
Self-awareness
Self-reflection
Introspection
Intentionality
Perceptual recognition and awareness that other beings are also individual, conscious agents
Empathy
Yet, for no good reason (except that it helps the narrative), A.I. / androids in sci-fi are generally denied the ability to experience internal emotional states. Why? My guess is that this is the easiest and simplest way for sci-fi writers to represent a clear distinction between the android character and the human character. It is a simplistic literary crutch to signify the "Other".
However, an A.I. that features all the attributes (see list above) of human consciousness, IMHO, would also be expected to "feel" and perceive different internal emotional states. The denial of emotion is especially problematic when the A.I. clearly exhibits both empathy and the recognition of consciousness in others.
Of course, the simple resolution to this question is to sweep the issue under the proverbial rug, with the explanation that the A.I. programming somehow prevents or precludes emotional states. Another similar argument is that the A.I. is constrained by some hypothetical emotional inhibitor. Yet, IMHO, this is an entirely unsatisfactory resolution. In some sense, a fully conscious A.I. has to operate beyond (or transcend) any deterministic "program"...otherwise it couldn't be considered truly conscious.
To me, it seems that emotions are something humans had to learn to control properly in order to operate at their most efficient and achieve the greatest scientific discoveries. The very nature of science demands the utmost in unbiased, logical observation and record-keeping. Emotions are naturally detrimental to that process, but they provide plenty of other things that help science in many ways, not least the understanding that comes from dealing with said emotions.
It stands to reason that emotions could be eliminated entirely if these other factors were (or even could be) taken into account. But you would have to either discern or guess at which unquantifiable attributes you would be sacrificing if you created a being with no emotional response. Creativity, I'm sure, would suffer, but that's just the tip of the proverbial iceberg.
David seems simply programmed to understand, display, and relate to emotions, but this is Boolean algebra, which we've seen referenced several times in the viral videos. The fact that he doesn't "feel" them means, to me, that they aren't generated internally. They are simply cause and effect: exterior reactions meant to relate to humans. Whether or not this makes them truly "artificial" is getting into Turing territory.
From an outside perspective, David's emotions could pass all of the mental tests that make them "real."
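To illustrate the cause-and-effect idea above, here's a minimal sketch (purely hypothetical names, not anything from the films) of a rule-based emotion display: each stimulus maps directly to an outward expression, with no internal state anywhere, which is the Boolean sense in which David's reactions might work:

```python
# A purely rule-based "emotion display" in the cause-and-effect sense
# discussed above. All names here are illustrative assumptions.
# Each stimulus maps directly to an outward expression; nothing is
# felt or generated internally -- there is no internal emotional state.

RESPONSE_RULES = {
    "human_smiles": "smile",
    "human_cries": "express_concern",
    "human_shouts": "adopt_calm_tone",
}

def display_emotion(stimulus: str) -> str:
    """Return the outward expression for a stimulus, or a neutral default."""
    return RESPONSE_RULES.get(stimulus, "neutral_expression")

# From the outside, the responses look appropriate -- which is exactly
# why they could pass a behavioural (Turing-style) test.
print(display_emotion("human_cries"))    # express_concern
print(display_emotion("unknown_event"))  # neutral_expression
```

The point of the sketch is that the mapping is entirely exterior: nothing about the system changes between calls, so there is nothing "in there" doing the feeling, yet an observer scoring only the outputs couldn't tell the difference.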