The Philosophical Zombie

Let's suppose you have an entity, say a robot, that can respond to anything in exactly the same way a human would, but with one crucial difference: the entity has no awareness of what it is doing.

For example, you could put a gun to this thing's head, and it would say anything a human might in that situation. But it would not know that it was saying these things. It would have no sense of self.

The question is: is such an entity possible? One side says yes, and argues that there is something intangible that humans have, separate from their brains, that this entity would lack.

Others say it is impossible for such a thing to exist. If it responds in every way as if it were self-aware, they say, then it IS self-aware, since there is no way to prove that it isn't and no difference you can point to.

This is called the Philosophical Zombie problem, and it's been argued for centuries and never solved. It's also the basis for countless works of science fiction, usually stories about robots or artificial intelligence.

Which side do you take on this problem?
 
Robots (or the like) could never react in the same way as a human... Humans use things such as "leaps of faith", "gut instincts", etc... Robots tend to analyse and use logic... These do not make for good human responses...
 
It would be relatively simple nowadays to pre-programme a set of responses for given situations. The trick to determining self-awareness would be to create a situation for which it did not have a pre-set response. This (IMO) would be the only way to get at the truth.
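Something like this, say (a minimal Python sketch; the situations and replies are invented purely for illustration):

CANNED_RESPONSES = {
    "gun_to_head": "Please don't shoot! I'll do whatever you want!",
    "greeting": "Hello! Nice to meet you.",
    "insult": "That was uncalled for.",
}

def respond(situation: str) -> str:
    # A situation the programmer never anticipated falls straight
    # through the table, which is exactly the gap I'd probe for.
    return CANNED_RESPONSES.get(situation, "...")

print(respond("gun_to_head"))     # convincing canned reply
print(respond("moral_dilemma"))   # "..." : no pre-set response exists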

PS. Wasn't it Turing who came up with a set of questions and responses for determining exactly this a few decades ago?
 
The Master™ said:
Robots (or the like) could never react in the same way as a human... Humans use things such as "leaps of faith", "gut instincts", etc... Robots tend to analyse and use logic... These do not make for good human responses...
More and more often, people are realizing that for the most part, 'gut instincts' are in fact logical - we don't normally follow the logic, we just jump to the conclusion, but if you analyse it later you will see how it is actually logical. Leaps of faith are fully illogical and therefore could actually be programmed into a computer. So this wouldn't be a good test either.
 
Much of what looks to the rest of the world like illogical behavior is actually relentlessly logical -- it's just based on inaccurate premises, or on information that other people don't have, or on different values. But people -- sane people, anyway -- can choose whether to go on accepting or to change (or adapt) these premises in response to new experiences.

I don't know enough about artificial intelligences to say whether they're able to do that, too. Perhaps some of the rest of you will enlighten me?
 
The Philosophical Zombie is a very scary thought - if consciousness can be mimicked, then how is it possible to know whether any other person or animal is aware or not...


If the zombie is possible, then for all I know I could be the only conscious entity in the universe!
Also, on leaps of faith: how can you have leaps of faith if randomness doesn't exist?
 
I don't know much about the philosophical zombie, and it already has me confused, but I basically believe that it is quite possible for a robot to be just as capable as a human and similar in every way (provided we had another few hundred years). This is because the human body is itself a robot, just an organic one, and our brain, as complex as it is, is also an organic machine. I think that if we were to create a program complex enough, it could be done; there are evident setbacks, but these could be solved by further scientific advancement in miniaturisation so we could fit a program of that complexity into a brain-sized computer. As for a robot without self-awareness, we already apparently have animals that lack self-awareness, from what I've heard. I think it's possible, but there would be certain emotions that would be impossible without self-awareness, such as pride.
 
True, like single-celled organisms (amoebae): they couldn't possibly have conscious thought, since they don't have a brain...

...but it's harder to tell with 'higher' organisms
 
I think that it is entirely possible to program a robot to react in any situation with a pre-set response; however, the robot cannot have a sense of being, and therefore no conscious thought. In essence, the robot is still simply a machine with predetermined actions.
It could only react in an impartial manner where human judgement is clouded by emotions, although many responses will probably be biased, because the creator of the robot will have programmed in his or her theory of the reaction which would take place in the given situation.
 
Well, I'd say the robot would have an awareness of the stimuli and send out the appropriate response. Regardless of how realistic the response is, however, it is not self-aware, any more than my keyboard is aware of the words I'm typing, regardless of the fact that a succession of keystrokes produces intelligible words. If you hold a gun to the head of a robot programmed to respond with a distressed-sounding response, the only interpretation is on our end. The robot simply carries out a set of commands, and a view of the source code is all that is necessary to prove it. Generalized behaviors and randomness are difficult to reproduce too: since computers are deterministic, logical machines, a pseudo-random reaction generated from seeds is necessary. It is possible that eventually a seed will be repeated, and the robot's seemingly random response will be repeated as well, in the exact same manner.
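To see what I mean about seeds, here's a rough Python sketch (the response lines are invented; this is just an illustration):

import random

# A reaction driven by a pseudo-random generator is fully determined by
# its seed, so reusing a seed replays the exact same "random" response.

DISTRESS_LINES = [
    "Please, don't!",
    "You wouldn't dare.",
    "Wait, we can talk about this!",
]

def react(seed: int) -> str:
    rng = random.Random(seed)   # deterministic for a given seed
    return rng.choice(DISTRESS_LINES)

print(react(42))
print(react(42))   # identical output: same seed, same reaction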

I'm starting to fall asleep, so I'm going to stop now.
 
Just bad luck probably. I think you came in here just as the discussion was beginning to wane.

Don't take it personally, thread deaths are a way of life for me :D

But seeing as you've posted, here's my thoughts on your thoughts. I think that programming will become ever more sophisticated - and that will make it much more difficult to distinguish between actual sentience and pre-programmed responses which attempt to mimic the same.

This, I believe, is the premise of the Turing test.
 
Actually "randomness" and other such things are not that hard to reproduce in a computer. Random generation is quite easy to program and "fuzzy" logic (i.e. logic not tightly bound by classical laws) can be handled with Heuristic Algorithms. I'm not even going to attempt to get into the computer science involved (it's way over mine and most other peoples heads). But if you want a good introduction to how such things could be modeled I would suggest Sheldon M. Ross's book "Introduction to Probability Models" or Thomas Ferguson's "Mathematical Statistics: A Decision Theoretic Approach" (don't be put off by the pretentious "academic" titles, anyone with a high school math education could easily handle those books.)

The real problem in the creation of a "strong AI" (a machine that could think like a person, often considered the "holy grail" of computer science) is not in reactions at all. Rather, it is in the creation of original and subjective thought. For example, a computer could formulate the sentence:

"That man is very tall"

pretty simply. It would be easy to take the man's height and compare it to other people's stored in the computer's memory (a rough sketch of this comparison follows below). Likewise, it would also be easy to say:

"I think that (whatever) is a good idea, I would recomend it"

The computer could evaluate the logistics of the plan, compare it to given parameters, and create that response (in fact, the US military uses such computers to create the best supply routes for their troops, almost totally replacing the humans that used to do that job). However, a sentiment such as:

"That man is ugly."

would be beyond the reach of the machine. It could create the sentence, but the human perception of "ugliness" is born of millions of years of evolution and a strong subjective/biological reaction, something that a computer could not have. A strong AI, therefore, would not be capable of emotional FEELING, although it may be able to ape an emotional RESPONSE (the classic treatment of this idea in science fiction being the HAL 9000 computer).
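Here's the promised sketch of the "very tall" comparison (the sample heights and percentile cut-offs are my own invention): the objective judgement reduces to a lookup against stored data, while no analogous computation exists for "ugly".

STORED_HEIGHTS_CM = [158, 163, 170, 172, 175, 178, 180, 183, 188, 201]

def describe_height(height_cm: float) -> str:
    # Compare the man's height against the heights stored in memory.
    shorter = sum(1 for h in STORED_HEIGHTS_CM if h < height_cm)
    percentile = shorter / len(STORED_HEIGHTS_CM)
    if percentile >= 0.9:
        return "That man is very tall"
    if percentile >= 0.7:
        return "That man is tall"
    return "That man is of average height"

print(describe_height(195))   # "That man is very tall"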

However, HUMAN emotion is not, as far as we know, a prerequisite for self-awareness. This also does not take into account the fact that a computer could evolve at a rate many thousands of times faster than anything in nature: experiments in self-modifying "evolving" programs (ones that can actually reprogram themselves) are already being done at several universities.
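For a feel of how such an "evolving" program works at its simplest, here is a toy mutate-and-select loop (purely my own illustration; the real experiments evolve actual program code, whereas this just evolves a list of numbers toward a target):

import random

TARGET = [3, 1, 4, 1, 5]

def fitness(genome):
    # Higher is better; zero means a perfect match with TARGET.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome, rng):
    # Make a small random change to one "gene".
    child = list(genome)
    i = rng.randrange(len(child))
    child[i] += rng.choice([-1, 1])
    return child

rng = random.Random(0)
best = [0, 0, 0, 0, 0]
for _ in range(2000):
    child = mutate(best, rng)
    if fitness(child) > fitness(best):   # keep the child only if it improves
        best = child

print(best)   # after enough generations this matches TARGET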

At this accelerated rate of evolution, it might be possible for the AI to develop its own idea of "emotion" that would be very different from the human sense of the word. Humans often operate under the assumption that their way of seeing the world is the only way, since we have nothing to compare it to.

It is more likely that communication with an AI would be more like a conversation with an alien than a mechanical mirror of our own views.

I personally think that the Philosophical Zombie is an impossibility. If the thing gives all the reactions of a self-aware entity, then in my mind no other criteria need to be met. What other criteria or test could there be? Some would want to talk about a "soul", but there's really never been any evidence for that.
 
ADangerousIdea said:
...respond to anything in exactly the same way a human would, but with one crucial difference: the entity has no awareness...
...if one requires a robotic entity to respond without awareness, then one would need to program in a particular response for a particular stimulus, or program random responses, etc. I think HAL 9000 is a fair example of how much hardware AI would need in order to "think" like a human. Along with the potential for paranoia.
 
Isn't this basically one of those logic-paradox kinds of questions? Asserting the existence of a being that responds "exactly like a human" but is not self-aware is like asserting the existence of a rock so big even God couldn't lift it, or the truth of the statement "This sentence is false". Humans are (presumed to be) self-aware. To respond "like a human" therefore requires self-awareness.

Also, "exactly like a human" is a little less precise than you might want in your thought experiment. Six billion humans will have six billion different reactions to having a gun pointed to their head, even though large numbers of those responses can be grouped together (e.g. pleading for mercy, insulting the gun wielder, attacking the gun wielder, running for it, passing out, etc.).

John Searle is a big proponent of the "there's something special about humans" hypothesis, but he has yet to even speculate on what that something might be. (As a philosopher and not a scientist, this is acceptable, I guess.) It can't be just the observable reaction -- the constraints of the problem imply that the philosophical zombie would pass a Turing test.

It's been established in psychology research that neural activity generates conscious perceptions only when it occurs in certain parts of the brain -- is the proposed X factor for consciousness some detail of neural biochemistry that only some neurons have? Is it a topographical feature of that neuron's connections to others? Incidental quantum information processing in the microtubule structures (a la Roger Penrose)? Nobody has much of a clue.

What this problem ultimately does is expose how limited our understanding of consciousness is. I'll tell you one thing: The idea that animals don't have sensations and feelings like we do is just as hard to rationalize scientifically as the idea that everyone besides you is a philosophical zombie.

--Otis
 
