# Something to Think About



## Drachir (Nov 5, 2009)

www.AskDrEldritch.com -- The Webcomic!


----------



## Karn Maeshalanadae (Nov 5, 2009)

I see where you're going with this, Drachir, and while I'm not paranoid about it on a Terminator or Matrix level, nonetheless one does have to wonder just how far technology is going to go, and what it will be capable of by the time the human race gets into a near-apocalyptic situation...


----------



## Drachir (Nov 6, 2009)

Manarion said:


> I see where you're going with this, Drachir, and while I'm not paranoid about it on a Terminator or Matrix level, nonetheless one does have to wonder just how far technology is going to go, and what it will be capable of by the time the human race gets into a near-apocalyptic situation...




Actually I might not be going in the direction you think.  One of the great questions facing humanity is species extinction.  There is no particular reason for Homo sapiens to outlast the dinosaurs, especially given the number of cosmic events that could wipe us out (giant asteroids, bursts of gamma rays from supernovae, and so on).  However, an artificial intelligence endowed with human qualities (except perhaps the tendency toward mindless self-destruction) would be a way for humanity to live on.  

Let's look at it this way: in the future an AI is created that is truly human in almost all aspects, so much so that it would be indistinguishable from humans in appearance.  It would have human emotions, human intelligence, human creativity, and so on.  This AI would be able to self-replicate, be imbued with a conscience, and view itself as human.  However, it would have certain advantages over organic humans in that it would never age, be virtually indestructible, and be perfectly suited for making interstellar voyages that would allow pseudo-humans to spread throughout the galaxy, thus ensuring the survival of humanity, at least in terms of what most people consider human.  This idea has been explored by writers such as Isaac Asimov, in films like Blade Runner and Star Trek, and, of course, by people like the great Alan Turing.  
Would a robot that looks like a human, thinks like a human, and acts like a human, be human?  It is sort of like the old observation about the duck. 
It is interesting to note that, given the rapid advances in computer technology, it is very likely that a truly aware AI will exist before the end of this century.  It will then be up to humanity to decide if this new form of human intelligence is really human.  

I see this form of humanity as a continuation of the human species in mechanical form and not as the evil menace that will destroy it.


----------



## Karn Maeshalanadae (Nov 6, 2009)

I honestly don't see it that way. Those with the power to bring about humanistic AI are the ones IN power, and they're bent more towards war and destruction than they are towards peace and creation. Robotic soldiers are far more likely to pop up than robotic doctors or robotic teachers.

I honestly feel that if we outlast any possible natural event by the time we perfect cyborgs, they'll be the ones to destroy us, not save us. And I use the word "us" loosely, in that you're describing scraps of metal and wires as "human".

I, for one, honestly believe that any continuation of truly intelligent AI that has the capabilities of performing acts upon their own with human emotion and conscience will USE that human emotion and conscience-the way we humans are using it now. Negatively.


----------



## Drachir (Nov 6, 2009)

Manarion:
It appears that you have a much more apocalyptic view of this sort of evolution than I do.  However, you are missing the point I was trying to make.  If we succeed in creating a truly sentient being, capable of replicating all aspects of humanity in a non-organic form, then it would in every sense of the word be human.  In other words it would not wipe us out, as it would be us.  Also, following the theme of evolutionary advancement, it should be possible to create a being that would be free of the murderous attitudes that infect the human race.  As such it would be more likely to protect us than destroy us.  After all, there really is no point in creating a more perfect being simply to create an advanced form of Frankenstein's monster.  It is worth looking at the way creations such as the character Data were treated in Star Trek.  Data was in no way a threat to humanity; instead he longed to become human.  That is the way I visualize future AI: not as a curse, but as a blessing.


----------



## Karn Maeshalanadae (Nov 6, 2009)

It would not be human as it would not be natural. Homo sapiens is a species that has naturally evolved over a million years without being the metallic result of a formerly dominant species upon this planet, which is what these sentientbots would be. They would mimic humans, and quite possibly become a dominant species, but they would not be truly "alive" in the sense of the word, and therefore not human. Self-replication isn't the only criterion for being human or animal. There's also birth/death cycles, gestation periods, and the need for food, water, and, in humans' case, shelter and clothing.


But most importantly, every truly alive animal and plant on this planet and any living species on any other planet are made up of organic natural cells. Not even viruses are considered to be "truly alive" because they are simply made up of incomplete genetic coding, not of so much as one complete natural, living cell.

So unless these sentientbots would be made up of natural cells, they would be in no shape or form "alive" and therefore not human.


----------



## Dave (Nov 6, 2009)

Manarion said:


> But most importantly, every truly alive animal and plant on this planet and any living species on any other planet are made up of organic natural cells. Not even viruses are considered to be "truly alive" because they are simply made up of incomplete genetic coding, not of so much as one complete natural, living cell.
> 
> So unless these sentientbots would be made up of natural cells, they would be in no shape or form "alive" and therefore not human.


I agree; the time when terminators replace humans is some time away. Unless we can build organic machines? Otherwise we will stumble up against:
- the beautiful way DNA replicates (and the quality control)
- the ability of our bodies to repair themselves
- the ability of our bodies to power and supply themselves

I mean, what machine keeps working for 80+ years, practically renewing itself from the inside out every X number of years, repairing itself when broken, or going for help to do so?

Yet, oddly, androids such as Data, or the one in A.I., always live very long lives and always look young and new.


----------



## skeptical (Nov 7, 2009)

I have been in various discussions about what makes a person 'human'.  This is a common concern when discussing, for example, abortion.  If we kill a fetus in the early stages of development, is that murder?  It is only murder if what we kill is human, since we are already in the business of killing animals, and we do not define that as murder.

I decided some years ago that what makes us human is that mass of tissue between our ears.  A person who suffers a terrible accident and loses arms and legs, is still human, because they still have a brain and still think.  On the other hand, a child born with the birth defect anencephaly has no forebrain, and will never be able to think.  That child is not, in my view, human, in spite of having normal-seeming arms and legs (which will never move, due to the lack of a control organ).

Is a robot built with a human type consciousness and intelligence human?   I think it is more human than the anencephalic child.

There is also the suggestion made by many people, that in the future, we will be able to download human consciousness and memories onto a computer.   That computer will then have the thoughts of the person who is downloaded.   Is that computer human?    I suspect the answer is yes.


----------



## Drachir (Nov 7, 2009)

I expect that the question "what is human?" could go on forever.  However, if it was truly impossible to distinguish a manufactured human from a real one, how would one answer the question?  We are back to the duck analogy once more.  

In any case, if humanity is to survive longer than a few million years, I expect it will be in some other form than the one we now occupy, and I am guessing that artificial humans are a strong candidate for the new form.


----------



## Karn Maeshalanadae (Nov 7, 2009)

Skeptical is right about it to a point. The child is still actually a living animal.

I do not think the robot is.


*My view on abortion is that if an embryo does not have a sufficiently developed brain it is not human, but a parasite in the actual definition of the term*




The computers wouldn't actually be truly living creatures. That's the real issue here, and besides, do you guys even know the full spectrum of human emotion? Let's assume for a moment that everyone considers a robot with human thoughts and human emotion actually human, but there's still some of us organic humans around.

Even if treated equally, wouldn't the robots ever turn on us? See what humanity is capable of now: look at what kind of crimes are being committed and wars waged.

Now put into the scenario a metallic human created originally by us. At the first death of a true organic human by a robot human all bets would be off, and human weakness on both sides would quite possibly result in a Terminator-style situation worldwide. Remember the movie I, Robot?


----------



## Dave (Nov 7, 2009)

Manarion said:


> The computers wouldn't actually be truly living creatures. That's the real issue here, and besides, do you guys even know the full spectrum of human emotion? Let's assume for a moment that everyone considers a robot with human thoughts and human emotion actually human, but there's still some of us organic humans around.



That is an interesting point. I was just reading this:
BBC NEWS | Technology | Early origins for uncanny valley

I too find those 3D human animated simulations strangely unnatural for a reason I cannot exactly pinpoint. I also believe that tribalism, and even racism, may be an inbuilt evolutionary response to avoiding disease and a bad gene-pool. If that is true about the 'uncanny valley' syndrome, then we may never accept androids as humans or even as equals.



skeptical said:


> I decided some years ago that what makes us human is that mass of tissue between our ears.   A person who suffers a terrible accident and loses arms and legs, is still human, because they still have a brain and still think.


But some people will always stare at the disfigured face, or the person with no legs, even though they realise it is rude, and you might think they must have some sympathy for the person continually being stared at. Look at the response of young children to anyone different. They have not learnt the social etiquette that they should not stare, and their curiosity demands that they ask exactly why the person is different.

And maybe we always want Androids to stay just that little bit different. Think of the fear caused if we actually could not tell the difference. It would be like 'Invasion of the Body Snatchers' or the 'Puppet Masters'.

If we think we will grow new organic bodies as receptacles for our brains, then would we simply replace existing bodies, or choose a new design? I expect everyone would have some kind of work done, but with unrestricted choice, fashion would dictate that we all look like David Beckham and Carla Bruni-Sarkozy. I think I would be able to spot those easily too.


----------



## Drachir (Nov 8, 2009)

Manarion said:


> Now put into the scenario a metallic human created originally by us. At the first death of a true organic human by a robot human all bets would be off, and human weakness on both sides would quite possibly result in a Terminator-style situation worldwide. Remember the movie I, Robot?



"I, Robot" was, of course, based on a series of Isaac Asimov stories.  The movie in no way resembled the book, with the exception of the title.  Perhaps Asimov's best robot novel was "The Caves of Steel", and the following books in the series, in which he explored the idea of exactly what makes a person human.  

The theme is similar to the film Blade Runner.  At the end of the film the audience is left wondering as to the exact origins of the hero.  Is he human, or is he a more advanced version of the replicants he hunts?  And, of course, there is the climactic scene in the film in which the replicant, Roy Batty, reveals that he may actually be more human than the man who hunts him down.  

Something that is creeping into this thread is a preconceived notion of what life and intelligence is.  If one were to encounter a cyborg that was capable of emulating every human emotion and was able to reproduce, how would we know if it was different from us?  The answer is that we probably wouldn't be able to tell the difference.  

Now, as for our robot creations destroying us: I think not.  There is no reason why a new life form need be created with the same murderous attitudes that infect humanity.  There is little point in evolution, mechanical or organic, if it merely acts in the same manner as its predecessor.  

There is also the fact that, given cosmic accidents, there would be no need for a superior life form to exterminate humanity; the universe is going to do that anyway.  The artificial life forms, with their superior durability, will survive, and with them their human consciousness; a consciousness we gave them.


----------



## Karn Maeshalanadae (Nov 8, 2009)

That's just it, though: a consciousness WE gave them. Humans are far from perfect angels and we will never reach that stage, so why do you assume that we would ever be able to create perfect angels, when human error is taken into account with every single invention on Earth?


We can create things extremely accurately, but there will always be a margin of error, and a life form we would "create", with whatever intelligence we have at the point of being able to create them, might very well believe that they, as the superior beings you pointed out they would be, Drachir, deserve the planet more than we do.


The closest we could ever come to humanity surviving into eternity is if we somehow colonize every single life-sustaining celestial body in the universe, and I highly doubt that will ever happen.

Of course, that point might be moot anyway if any apocalyptic event takes place before we're able to create exact-replica androids. You DO have to admit there are dangers out in our universe, in our very solar system, right now that could be potentially devastating to our planet. Our sun could go supernova, a gamma ray burst could occur nearby, disease and war could ravage the entire planet; all kinds of things can happen.


----------



## Drachir (Nov 8, 2009)

An interesting side to this topic is the idea of cybernetics.  The concept of the human-machine interface was explored in Bernard Wolfe's novel Limbo, written in 1952.  In the novel humans are given cybernetic organs that function better than the originals, and as a result society evolves into one in which a person is judged by how many mechanical organs he has.  The more complete the mechanical transformation, the higher the status of the individual.  Currently we are already moving in this direction with the creation of mechanical devices to sustain or enhance the abilities of those with birth defects, or injuries that have resulted in amputation or other serious damage.  This is strongly illustrated by Oscar Pistorius, the South African athlete whose two prosthetic legs allow him to out-perform many able-bodied athletes.  It is quite obviously only a matter of time before these prosthetic devices evolve to the point where able-bodied athletes will be easily out-performed.  We will then have to decide whether those equipped with such mechanical devices should be allowed to compete with normal athletes.

Taking this one step further: how many of us would choose to have a computer wired into our brains if it gave us a better memory, improved our rate of learning, or improved our response time?  I suspect, given the number of persons I have seen with music players or cell phones almost permanently attached to their heads, that there are many who would willingly accept such a device.  Also, what parent, upon receiving the news that their child suffered from a learning disability, might not wish to have it electronically corrected?  Given the way medical advances and society are moving, a human-machine hybrid is not very far off.  The next step is obvious, and that is to make such interfaces permanent.  

Can people who are part machine still be considered human?  I think they can and will be.


----------



## skeptical (Nov 8, 2009)

To Manarion

Human survival for eternity is really a bit too much to even consider.   Eternity is a long time, and even our own universe will not survive to eternity.

However, human survival for a very long time is quite possible.   I have thought for some time now that humans that colonise other parts of our solar system, and other parts of our galaxy, will not live in conditions in the least like planet Earth.

Most planets will be like Mars or Venus.   Some might be possible to terraform, but that takes a hell of a long time.   I read one estimate of 10,000 to 100,000 years.  I cannot see anyone getting enthusiastic about starting a process that will not finish for that length of time!

Instead, those who colonise places like Mars will burrow underground, and build extensive artificial habitats.  It occurred to me that a sensible plan of colonisation would be to build those habitats in space, and totally avoid all the problems of having to go up and down into a gravity well for travel.  They can be given gravity from spin, and radiation shielding by storing water in a 2-metre-thick external shell.  Such a habitat would have all the features of an underground one on Mars, and none of the detriments of fighting gravity.
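The "gravity from spin" figure is easy to sketch: for the rim of a rotating habitat to feel an acceleration of g, the angular speed must satisfy ω²r = g. A quick back-of-envelope calculation in Python (the 100 m radius is my own illustrative number, not from the post):

```python
import math

def spin_for_gravity(radius_m, g=9.81):
    """Return (rad/s, rpm) spin rate whose centripetal acceleration
    omega^2 * r at the rim equals the target surface gravity g."""
    omega = math.sqrt(g / radius_m)       # from omega^2 * r = g
    rpm = omega * 60 / (2 * math.pi)      # convert rad/s to revolutions per minute
    return omega, rpm

# A habitat with a 100 m radius needs roughly 3 rpm for Earth-normal gravity.
omega, rpm = spin_for_gravity(100)
```

Larger habitats spin more slowly for the same gravity, which is one reason designs tend toward big radii: slow rotation keeps the Coriolis effects felt by the inhabitants small.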

If human colonisation and reproduction leads to large numbers of such habitats in space, they can easily disperse to other places.   With enough dotted around the galaxy, human extinction would be seriously unlikely, and humanity, or whatever descends from humanity could still be alive in millions of years.


----------



## TheEndIsNigh (Nov 8, 2009)

Manarion said:


> I honestly don't see it that way. Those with the power to bring about humanistic AI are the ones IN power, and they're bent more towards war and destruction than they are towards peace and creation. Robotic soldiers are far more likely to pop up than robotic doctors or robotic teachers.
> 
> I honestly feel that if we outlast any possible natural event by the time we perfect cyborgs, they'll be the ones to destroy us, not save us. And I use the word "us" loosely, in that you're describing scraps of metal and wires as "human".
> 
> I, for one, honestly believe that any continuation of truly intelligent AI that has the capabilities of performing acts upon their own with human emotion and conscience will USE that human emotion and conscience-the way we humans are using it now. Negatively.


 
Obviously I like this talk of the end, and although I disagree with the timescales, it brings a warm glow to the heart all the same.

However, I have to disagree with your theory that artificial humans would be more likely to be used for destruction. Even given the benefits of mass production, an artificial life form capable of carrying and safely (to the maker) using a gun would be quite expensive (compare car manufacturing costs).

Humans are much cheaper and much easier to produce. 
They are easily inspired to murder and destroy.
They require nothing more 'logical' than a presumed slight to start fighting. 
They carry on even when the correct option would be not to start a fight that anyone with a morsel of reasoning power could see would be pointless. 
They hate with a passion. 
They love to see others suffer at their hands.

No, the human is the perfect fighting machine. If you introduce intelligent, rational-thinking elements into a war, I think you'll find it ends pretty quickly.


----------



## Nik (Nov 8, 2009)

Brains in Silicon

Neurogrid.

I played with 'simulated neurons' many years ago, but those were implemented with analog components and were too few to matter.

The Neurogrid should scale well. I know just enough about 'emergent complexity' to shrug. 

Big question is if they continue to play with it in their lab, or let a zillion amateurs loose on the tech. Robotics enthusiasts are more likely to figure ways of integrating this stuff with 'conventional' systems...
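For anyone who hasn't played with "simulated neurons": the simplest digital version is the leaky integrate-and-fire model, where a membrane voltage leaks toward rest, integrates input current, and emits a spike on crossing a threshold. A minimal sketch (all parameter values here are illustrative, not taken from Neurogrid):

```python
def simulate_lif(input_current=1.5, steps=200, dt=1.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dv/dt = (-(v - v_rest) + I) / tau.
    Returns the number of spikes emitted over the simulated interval."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # Euler step: voltage leaks toward rest while integrating the input.
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset  # fire and reset
    return spikes
```

With a sub-threshold input (here anything below 1.0) the voltage saturates without ever firing; above it, the neuron spikes at a regular rate. Networks of units like this, wired together in their thousands, are where the "emergent complexity" question starts.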

---
D'uh, I must point out that, unless both parties in a war can agree to differ and settle quietly, the coldest, most rational thinking can still come down to bloody war. 

Read Game Theory and weep...
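The game-theory point can be made concrete with the classic prisoner's dilemma: for each perfectly rational player, defection is the best reply to anything the opponent does, so the "cold, rational" outcome is mutual defection, even though both players would do better by cooperating. A small sketch (the payoff numbers are the standard textbook ones, chosen purely for illustration):

```python
# Payoffs (row player, column player) for Cooperate/Defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_action):
    """Row player's payoff-maximising move against a fixed opponent move."""
    return max("CD", key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection is the best reply to either move, so (D, D) is the unique
# Nash equilibrium, even though (C, C) pays both players more.
```

Substitute "disarm/attack" for "cooperate/defect" and the point of the post follows: two flawlessly rational parties can still end up at the mutually worse outcome.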


----------



## skeptical (Nov 8, 2009)

Endisnigh

I utterly, totally, and absolutely *disagree* with your statement that humans are perfect fighting machines. That is unscientific BS.

A little discovery was made by the American military early in WWII. And that is, 85% of soldiers do *not* want to kill the enemy. This behaviour is so strong that, when faced with a mass of charging enemy, this 85% will either shoot over their heads (deliberately missing), or not discharge their rifles at all.

Here is an ex soldier's experience.
To Kill or Not to Kill | The Progressive

For the Vietnam War, for the first time in history, the soldiers underwent special training to overcome that reluctance. Sadly, from Vietnam onwards, soldiers now actually aim at the enemy and shoot to kill. However, to achieve this, the armed forces have to apply lots of very special (and to me, abhorrent) training.

This training 'dehumanises' soldiers, and leads to psychological illness.
Amazon.com: On Killing: The Psychological Cost of Learning to Kill in War and Society (9780316330114): Dave Grossman: Books

This reluctance to kill was probably even more marked in times of old, when killing was done up close, and you could see your enemy's face. I read an account of a tribal battle in Fiji in the days when Europeans were just beginning contact. Apparently two groups, each about 200 strong, attacked each other. After a very vigorous half hour, both withdrew. Death toll - 2.

In the American Civil War, there were battles where so many bullets were fired that a bullet, statistically, should have passed through every square inch of possible target (as worked out by archaeologists who dug up and counted bullets). Yet in those battles, most soldiers survived. Obvious answer - most of those bullets were not aimed to kill.

No, humans are not perfect soldiers. Most of us are peaceable individuals who do not really want to harm others. We would prefer to be at home with our families.


----------



## Drachir (Nov 9, 2009)

skeptical said:


> Endisnigh
> 
> I utterly, totally, and absolutely *disagree* with your statement that humans are perfect fighting machines. That is unscientific BS.
> 
> ...



I am a little less inclined to credit humanity with benign motives on the battlefield.  Ancient armies and soldiers showed that they were perfectly capable of butchering one another on a fairly large scale, and with very few moral reservations.  As for the American Civil War, the number of rounds fired compared to those that hit has more to do with the quality of the firearms used than the desire not to hurt the enemy.  

You are certainly correct about humans not being perfect killing machines, which is why we invented machines to help us along, such as multimillion dollar aircraft.  These devices truly have no conscience.  Push the button and the bomb is on the way - unless the pilot has a very unlikely twinge of conscience and aims elsewhere.  

I cannot disagree with TEIN's observation that humans are cheap killing machines, except to point out money has seldom been an object when it comes to perfecting better ways of killing.  One merely has to look at the cost of the Manhattan Project or the current US spending on weapons research.  

Obviously robots can and have been created to kill.  However, I am not speaking of mindless, electronic, single-purpose devices.  I am talking about artificial organisms that would be human in every aspect except one - and that is the mindless desire to kill members of its own species.  It should be noted that for the most part humans are the only species that deliberately attempt to exterminate other members of their species.  Of course, there is an argument that warfare is learned behavior and that it can, therefore, be unlearned or not learned at all.  But that might be a topic for another thread.


----------



## skeptical (Nov 9, 2009)

Drachir

I am very aware (painfully so) that political correctness teaches that humans are nasty animals.  Political correctness is NOT scientific correctness.   In fact, rather often, it is total bulldust!!!

The truth is that most humans are actually highly gregarious and extremely reluctant to be the agency by which another human is hurt.   Numerous studies have shown that about 10% of humanity can be described as psychopathic - meaning that they are prepared to hurt other humans - but the other 90% are not.

A suitable comparison with another species is chimpanzees, which are our closest genetic relatives.  Believe it or not, chimps are not only willing to go to war with, and kill, other chimps, but do so with a degree of regularity.
Killer chimps fuel debate on how war began

By and large, humans are actually rather nice individuals, and care about their neighbours.   I suggest that you ignore the politically correct garbage that describes people as monsters, and actually judge others by what they are.


----------



## HareBrain (Nov 9, 2009)

I've found the last few posts really interesting. Skeptical, are you saying that before Vietnam, most of the killing done on the battlefield would have been by the 10% of people that could be classed as psychopathic? (Though probably much greater than 10% of combatants, as they would have been more likely to put themselves in that situation in the first place.)

I find it interesting that this reluctance to kill finds almost no place in war fiction, whether films, TV or books. Why would this be? And how does it square with the famous experiment (forgotten its name, sorry) where test subjects seemed only too willing to inflict pain on another, as long as they were told to do so by someone in authority?

From my experience, I would say that people are by nature generally friendly to their fellow humans, except where this has been eroded by stress, or where they have been persuaded that some people are less human than they are.

I was interested in the Fijian story. It reminded me of the theory that tens of thousands of years ago, hunters performed "magical" rites which were intended to reduce the guilt they felt at killing their prey, who they saw as their brothers. This lack of differentiation between themselves and others might have been shared by the Fijian tribesmen, who probably had a less developed sense of "separateness" than is normal in our culture.


----------



## mosaix (Nov 9, 2009)

skeptical said:


> The truth is that most humans are actually highly gregarious and extremely reluctant to be the agency by which another human is hurt.   Numerous studies have shown that about 10% of humanity can be described as psychopathic - meaning that they are prepared to hurt other humans - but the other 90% are not.



For the most part this is true, skeptical, except when they are given permission to, or ordered to hurt others by 'a higher authority'.

The famous Stanley Milgram experiments take some explaining away:

Stanley Milgram: Obedience to Authority Or Just Conformity? | PsyBlog


----------



## Ursa major (Nov 9, 2009)

Even if the "only the 10% who are psychopaths kill" position were to be true, it's not that comforting, given that the number in the armed forces is such a low percentage of the population. (After all the years in Afghanistan, only a cumulative total of about 75,000 UK** troops have been sent there, yet the UK's population is 61 million.)

If it were true, though, wouldn't army recruitment for those destined for the front line try to find members of that psychopathic 10%? Do they? (And would we really want to train these "10% folk", of all people, to wield weapons?)




** - And a lot of them are not actually citizens of the UK (not until later, anyway).


----------



## Dave (Nov 9, 2009)

Ursa major said:


> If it were true, though, wouldn't army recruitment for those destined for the front line try to find members of that psychopathic 10%? Do they? (And would we really want to train these "10% folk", of all people, to wield weapons?)


Quite a lot of soldiering is not just about "travelling the world, meeting interesting people, and then killing them!" If you have seen those TV and cinema advertisements for becoming a British Army soldier, it is all about peacekeeping and making difficult decisions in order to save lives. But I'm sure that were I ever unfortunate enough to become a soldier, the last thing I would want is for all my comrades in arms to be psychopaths. They may well kill more of the enemy, but I'd be more worried they would kill each other, and little me too. BTW wasn't this the plot of 'Inglourious Basterds' and 'The Dirty Dozen'?


----------



## zaelyel (Nov 9, 2009)

If mankind manages to create a truly 'human' robot then I don't think it would be more dangerous than the average human. Assuming that this robot is human in every way, then the only issue it would have would be the artificial way it was created. What I mean is that I would not suddenly turn evil if I found out I was in fact a robot.

I think that there would be more uncertainties if the robot were self-conscious and intelligent but had few or none of the other attributes that make us human, such as feelings or senses. In such a case it would be difficult to predict how it would process data, as it would be a totally new being, one that can think logically and intelligently but is lacking in other areas.


----------



## skeptical (Nov 9, 2009)

To HareBrain

I am not sure that I would say the 10% of the soldiers who shot to kill were the psychopaths.   Some would probably have just been decent men who over-rode their decent impulses out of a sense of duty.  Some would have been people who are more obedient to authority, even when that goes against their decent impulses.   In fact, I have no idea how many of those soldiers who killed were psychopaths.

The thing is, though, that those decent impulses preventing most people from shooting to kill, even in war, can be subverted.  Modern armies use training techniques specifically designed to cause decent men and women to overcome their decency and become killers.  For example: before Vietnam, soldiers were trained to shoot at targets with bullseyes.  From Vietnam onwards, those targets were replaced with images of the enemy.  When a soldier is used to shooting plywood cutouts of enemy soldiers on the practice field, they are more likely to do so in reality.

To mosaix

Referring to the Milgram experiments is a valid point.  Certainly, the willingness to cause harm when commanded to by authority is a human trait.   However, the fact that (before Vietnam) 85% of soldiers were not killers is also a reality.  One fact does not render the second fact invalid.


----------



## Drachir (Nov 10, 2009)

skeptical said:


> Drachir
> 
> I am very aware (painfully so) that political correctness teaches that humans are nasty animals.  Political correctness is NOT scientific correctness.   In fact, rather often, it is total bulldust!!!
> 
> ...



Did you read my last sentence?  I think we are on the same page.


----------



## Ursa major (Nov 10, 2009)

Dave said:


> Quite a lot of soldiering is not just about "travelling the world, meeting interesting people, and then killing them!" If you have seen those TV and cinema advertisements for becoming a British Army soldier, it is all about peacekeeping and making difficult decisions in order to save lives. But I'm sure that were I ever unfortunate enough to become a soldier, the last thing I would want is for all my comrades in arms to be psychopaths. They may well kill more of the enemy, but I'd be more worried they would kill each other, and little me too.


 
I agree.

I have serious doubts about this 10% figure (and/or the accompanying definition of psychopath). And the thought that only those with this tendency are able to shoot a rifle and kill an opposing soldier seems to me to be nonsense.


----------



## skeptical (Nov 10, 2009)

Ursa

I have not said that the 10% (actually 15%) of soldiers who, without special training, will shoot and kill the enemy, are psychopaths.   Do not forget that this study was done in WWII - when most of the soldiers were volunteers.   I suspect a lot of the ones who killed did so out of their sense of duty.  They may well have been utterly decent men.

The 10% who are psychopaths is a figure for the general population.   And the definition relates to their total lack of caring for other people.   It does not mean they are running around doing a 'Jack the Ripper' impersonation.   Most such psychopaths generally behave themselves, because they are not stupid, and they know they will get punished if and when they get caught.   However, they are totally prepared to commit crimes, if they believe the crime will be to their benefit, and they will get away with it.

The best way to minimise the damage to society by psychopaths is to set up policing of sufficient efficiency to convince the nasty bastards that they will *not* get away with it.


----------



## Ursa major (Nov 10, 2009)

I know what the 10% was supposed to represent.

My point was that if only those with psychopathic tendencies could be persuaded to kill the enemy, those troops required to kill the enemy would, all other things being equal, be chosen from amongst that supposed 10% of the general population. But I don't believe the 10% figure; and I do not believe that a soldier who has killed an enemy combatant is likely to be of a psychopathic tendency. Things aren't that simple and the real world is not as neat as some would have us believe.

(By the way, you brought up both circumstances: most ordinary humans (85%) supposedly being unwilling to kill; society supposedly containing 10% with psychopathic tendencies. If these two percentages are unconnected, why mention them both, as at least one of them must be irrelevant to this thread? )


----------



## skeptical (Nov 10, 2009)

Ursa

I think we agree.
The two figures are, as you said, quite separate.   10% of society has psychopathic tendencies, and 15% of WWII soldiers were the only ones who aimed at and shot the enemy.   I cannot know why that 15% were prepared to shoot the enemy; that is just the result of the study.   I could speculate.   I doubt it is because they are psychopaths, though a few may be.


----------



## mosaix (Nov 10, 2009)

I do believe that practically every male (and maybe female) would kill to protect their family. Whilst not wanting to, I certainly would if forced to by circumstances.

And I think this extends to protecting 'the country where my family lives'.

I spoke to my father, uncle and father-in-law about their experiences in the Second World War.  None of them had the opportunity to kill enemy soldiers but all three made it clear that they would have done so if the need arose. They, therefore, fall into the 'not killers' category. But that doesn't mean to say that they weren't potential killers. 

And on another point: the Russians lost 20 million people in the Second World War. I can't believe that they were all killed by just 15% of the German army. 

Also, skeptical, I think you are wrong that most of the soldiers in WWII were volunteers; this was true in WWI but not in WWII. 

Perhaps I am misunderstanding your general point, skeptical. If so, I apologise.


----------



## skeptical (Nov 10, 2009)

mosaix

Most of the Russians who died in WWII died of famine and disease, not lead poisoning.  

Let me point out, though, again.   The right (or wrong?) kind of training can change the statistics dramatically.   15% willing to shoot and kill becomes 85% willing to shoot and kill if that training is applied.   I do not know what training the German soldiers received.

I suspect that if you asked the soldiers heading off to fight in WWII if they were prepared to shoot and kill the enemy, most would say yes.   It is just that, on the battlefield, only 15% actually did so.  And this seems to have been the case in previous shooting wars also.  There is a big difference, especially with war, between what you *think* you will do, and what you actually will do.

As far as defending your family is concerned....
Statistics show that, when confronted by a burglar or criminal home invader, those who pull out a gun are about 4.5 times as likely to be shot and killed as those who submit.   And that goes for their families also.  So the smart thing to do to protect your family is not to resist.   Sorry.  I know we all have 'hero' fantasies.   (I know this statistic since I was recently in a gun control debate in another forum.  Really more like a gun control battle royal.   People get so emotional on that subject!)

Just another point.  Not part of any argument.  This is just something I found interesting, and I hope you will also.   My facts here may be shaky, and I do not have a reference.  Sorry.   This came from a newspaper article over 20 years ago, and my numbers may not be exactly correct, but they will not be far out.

This newspaper carried out a survey of a load of people and asked:   "If you were offered $100,000,000 to kill a total stranger, and you knew with 100% certainty that you would get away with it, would you do it?"

And about 85% said :  "Yes!"

Of course, if that situation actually happened, they very likely would find themselves unable to go through with it.   However, the answer makes it clear that there is a little of the psychopath in most of us.


----------



## Dave (Nov 10, 2009)

I'm not convinced by these arguments; for a start, I think there is a huge difference between running someone through with a bayonet and seeing their blood and guts spill out, manning a machine gun above the First World War trenches, and pressing a button that launches a nuclear attack. Somehow the first of those seems the hardest to do, and yet the last causes the greatest death toll.

I would say that the more automated war becomes, the more it can be distanced from the actual killing fields. I'm convinced that the crew of the Enola Gay, or those in Bomber Command who fire-bombed Dresden, would never have been able to shoot a complete stranger in the back of the head. To bring the discussion back to the original subject, the use of robots in warfare does exactly that: it makes warfare automated. If the robots are doing the killing rather than the people behind them, it will be easier to kill.


----------



## skeptical (Nov 10, 2009)

Dave

You are probably quite right.  It is much harder to kill someone when you can see his face.   The statistic I quoted referred to soldiers on a battlefield with rifles.  Pressing a button that drops a bomb on unseen victims would be much easier.


----------



## mosaix (Nov 10, 2009)

skeptical said:


> Let me point out, though, again.   The right (or wrong?) kind of training can change the statistics dramatically.   15% willing to shoot and kill becomes 85% willing to shoot and kill if that training is applied.   I do not know what training the German soldiers received.


Now I understand your point better. 



> I suspect that if you asked the soldiers heading off to fight in WWII if they were prepared to shoot and kill the enemy, most would say yes.   It is just that, on the battlefield, only 15% actually did so.


Maybe only 15% got the opportunity. As I said the three people who I've spoken to didn't get the opportunity to actually fire at the enemy. 




> As far as defending your family is concerned....
> Statistics show that, when confronted by burglar or criminal home invader, those who pull out a gun are about 4.5 times as likely to be shot and killed compared to those who submit.   And that goes for their families also.  So the smart thing to do to protect your families is not to resist.   Sorry.  I know we all have 'hero' fantasies.


That wasn't the scenario I was thinking about. But anyway, the fact remains that people *do* do it, even if they get killed in the process.


----------



## skeptical (Nov 10, 2009)

Mosaix

Maybe I am not communicating very well.  The 85% of soldiers who did not shoot to kill were 85% of those at the battlefront, with loaded rifles in their hands, and staring at the enemy.   Not those who never got to face the enemy.

The whole point is that it is not easy to face another person and shoot him or her.   Sure, there are psychopaths.   However, most people find it very hard to do.   They can be trained to do it, but without special training will either not pull the trigger, or aim to miss.

I find these figures very comforting.   I would much rather that humanity was naturally peaceable than the opposite.


----------



## Drachir (Nov 11, 2009)

There is some encouraging evidence that violent behaviour is learned rather than instinctive.   A classic example of such learned behaviour is that of the Vikings, who were a murderous scourge from the eighth to the tenth centuries and whose descendants are now among the most peaceful of the world's peoples.  The same can be said of the Mongols, BTW.  There are numerous other examples of this sort of change in violent behaviour.


----------



## HareBrain (Nov 11, 2009)

skeptical said:


> The 85% of soldiers who did not shoot to kill were 85% of those at the battlefront, with loaded rifles in their hands, and staring at the enemy. Not those who never got to face the enemy.


 
Any more info on the circumstances in which this observation was made, or how it varies with the conditions? I can believe the 85% figure when shooting at enemy soldiers crossing an open field several hundred yards away, but I'd have thought it would come down considerably when the soldiers were being charged at by an enemy that outnumbered them. Fear can be a powerful dehumaniser.


----------



## skeptical (Nov 11, 2009)

A simple Google search reveals heaps of information on this.   For example: On Killing: The Psychological Cost of Learning to Kill in War and Society

_"On Killing: The Psychological Cost of Learning to Kill in War and Society_
_by Lt. Col. Dave Grossman_
_Non Fiction_

_The good news is that the vast majority of soldiers are loath to kill in battle. Unfortunately, modern armies, using Pavlovian and operant conditioning, have developed sophisticated ways of overcoming this instinctive aversion. The psychological cost for soldiers, as witnessed by the increase in post-traumatic stress, is devastating. The psychological cost for the rest of us is even more so: contemporary civilian society, particularly the media, replicates the army's conditioning techniques and, according to Grossman's controversial thesis, is responsible for our rising rate of murder and violence."_

I could do all sorts of searches on Google and post literally hundreds of references.     The basic point is that humans are reluctant to kill.  Believe it!


----------



## Drachir (Nov 11, 2009)

skeptical said:


> A simple google search reveals heaps of information on this.   For example :   On Killing: The Psychological Cost of Learning to Kill in War and Society
> 
> _"On Killing: The Psychological Cost of Learning to Kill in War and Society
> by Lt. Col. Dave Grossman
> ...



I am not disagreeing with the idea that killing is learned behaviour, as my previous post shows.  However, it is worth noting that, for a species supposedly reluctant to kill its fellow humans, history has shown that we are awfully good at it, and have been for the last eight millennia or so.  

One factor that may play a part in this phenomenon is that historically human lifespans were short (forty years or so until the last century).  This means that in the past many societies were led by relatively young men; Napoleon and Alexander serve as good examples.  With large numbers of young men placed into situations likely to lead to violence, a considerable amount of mayhem ensued.  It is interesting to note that the declining rate of violent crime in modern society is probably due in large part to the ageing of the population.  In other words, young people are more likely to use violence to solve problems and to see warfare as an appropriate (and sometimes glorious) way to settle disputes.  

To return to my original topic: there is no reason why artificial intelligence should inherit this youthful enthusiasm for violence and murder.  In fact, given that an AI can be programmed to act in a desired manner, there is no reason why AIs would turn on their creators, or be violent in any way, any more than a truck or any other machine is violent.


----------



## Chinook (Nov 12, 2009)

Anyone ever see the movie "Alice's Restaurant"?  If not, look it up, and you'll understand why I brought it up. 

Here's something to think about:

What is likely to happen is that they will make laws that prohibit dangerous robots and/or androids. _Then some rogue androids will go and kill the lawyers who made those laws. Perfect solution. Everyone is happy. _


----------



## Drachir (Nov 12, 2009)

Chinook said:


> Anyone ever see the movie "Alice's Restaurant"?  If not, look it up, and you'll understand why I brought it up.
> 
> Here's something to think about:
> 
> What is likely to happen is that they will make laws that prohibit dangerous robots and/or androids. _Then some rogue androids will go and kill the lawyers who made those laws. Perfect solution. Everyone is happy. _




Fantastic film.  One of my favourites.  Loved the blind justice bit.


----------



## skeptical (Nov 12, 2009)

Can't kill lawyers.   Half of any government is made up of lawyers, who control the military and look after their own.    Why do you think so many legal loopholes exist that give lawyers a chance to make lots of $$$$?


----------



## Chinook (Nov 12, 2009)

I'm sorry for this, but all I have is another joke.

If you want to find out how human an android is, try giving him a few "test tickles". 

(groan)


----------



## Drachir (Nov 12, 2009)

Chinook said:


> I'm sorry for this, but all I have is another joke.
> 
> If you want to find out how human an android is, try giving him a few "test tickles".
> 
> (groan)



You deserve to be robotamized for that comment.


----------



## Window Bar (Nov 25, 2009)

As life forms, is procreation not our mandate? Ain't it fun to have free will?


----------

