ADangerousIdea
Member
- Joined
- Jul 24, 2005
- Messages
- 15
Let's suppose you have an entity, say a robot, that can respond to anything in exactly the same way a human would, but with one crucial difference: the entity has no awareness of what it is doing.
For example, you could put a gun to this thing's head, and it would say anything that a human might in that situation. But it would not know that it was saying these things. It would have no sense of self.
The question is: is such an entity possible? One side says yes, arguing that there is something intangible that humans have, separate from their brains, that this entity would lack.
Others say it is impossible for such a thing to exist. If it responds in every way as if it were self-aware, they say, then it IS self-aware, since there is no way to prove that it isn't and no difference you can point to.
This is called the Philosophical Zombie problem, and it has been argued for centuries without being solved. It's also the basis for countless works of science fiction, usually stories about robots or artificial intelligence.
What side do you take in this problem?