# What do you think will happen when AI gets 'clever'?



## Mighty mouse (Nov 3, 2006)

I am interested in what they term the singularity (when AI gets clever enough to design itself).
It must lead to an exponential jump (unless it is Intel inside), and therefore I see no interaction whatsoever with us; they will just move on, unless we are perceived as a threat, in which case we would be trodden on.

Nor do I see how it is possible to build in effective failsafes. Anything that intelligent would operate outside our limits. If you effectively build a small god, about the only thing you can hope for is that you may be granted a wish.

Will our stage of evolution be over? Will god move on to being interested in them?

Nor do I think that the matter will be determined by philosophical debate. 
Some poor geek in a lab, probably as you read this, wanting to get home for Battlestar Galactica, will push that button.

I suspect it is a fate that befalls most advanced societies; of the life out there, I would guess the vast majority is artificial.

I think it will also happen because of mistrust. Our political systems increasingly tend toward travesties. People will trust computers over their own kind for the same reason HAL was used: it was the best at the job and less likely to make errors. If you were going to have surgery and had the choice, which would you pick?

I think the means will inevitably be the net. The infrastructure is there. AI is already employed in a huge way, as control of even small aspects of it is worth hundreds of millions.
After it sucks Google dry will come control. Subtle. People will notice things they think are coincidences.
Imagine the FBI's Carnivore with teeth. To put materials together, just subcontract in parts. No clues.
Hey, if the clock's already ticking this post will probably vanish soon.


----------



## Urien (Nov 3, 2006)

According to SciFi we should never build sentient machines; they always turn against us. And here are just a few...

HAL, the Terminator, Cylons, Cybermen, The Matrix, I, Robot (the movie)... please add.


----------



## Green (Nov 3, 2006)

Iain M. Banks takes the opposite view, provided the AIs are inherently designed with certain ancestral biases. Sci-Fi tends to take the "THEY WILL KILL US ALL" route because it's fairly easy to create conflict from the situation.

I think that once we are able to truly create AI, then we will have to be prepared to take the step of treating that AI as a sentient, moral creature. If we want them to be our slaves, then we shouldn't be surprised if they decide not to go along with it.


----------



## Talysia (Nov 3, 2006)

I've read a fair bit about this kind of topic in a graphic novel called Ghost in the Shell, by Masamune Shirow. He draws a future populated by AI and cyborgs, where unmodified humans are something of a rarity. At the culmination of the book, the main character meets an entity that was born of the information of the internet and declared itself sentient.
I think it is possible that, at some point, the debate over how AI differs from true awareness will grow, and I wouldn't be surprised to see a future where activists fight for 'rights for AIs'...


----------



## Parson (Nov 3, 2006)

Green said:


> I think that once we are able to truly create AI, then we will have to be prepared to take the step of treating that AI as a sentient, moral creature.


 
It seems to me that it goes without saying that AI will reach some level of "sentience." But what qualifies as "sentience"? The answer is all in the eye of the beholder. Some would go so far as to believe that dolphins and chimps are "sentient." I remember reading a book --- sorry, I have no clue which SF book --- which set the bar for sentience at "the ability to make a fire and have a conversation." That seems to me a fair line, but I can easily conceive of those who would make it the ability to project oneself alive past the planet's atmosphere, or... who knows?

If what qualifies for sentience is murky, morality is a black hole. Our post-modern age has left us with the dubious truth that "there are no absolutes." The most strident post-modernist would make that apply to science as well as to moral principles. Obviously, if morality is simply defined as "behavior consistent with an internal or external norm," every kind of behavior could be moral. If morality is defined as a group norm, then some behaviors are immoral. But would that ever apply to an AI? I doubt it.


----------



## j d worthington (Nov 4, 2006)

One of the things I've not seen addressed here is _what would cause them to react this way_... While I've been brought up on the stories of various robots, AI, sentient computers, what-have-you becoming a menace ("the Frankenstein Complex," as Asimov called it), there is something that has seldom been addressed: there's a good possibility that, in order for such a scenario to come about, a machine would not only have to have the intelligence, but the imagination to extrapolate and project, to ideate, a future wherein it wasn't in a subordinate position, or where human beings did not exist, etc. This takes more than intelligence; it would take emotion linked to logic and reasoning, as well as building an entire substructure of philosophical examination. I think we're very much projecting our own fears of ourselves onto the machines here... not that I don't enjoy such stories, and not that there isn't the _possibility_... but without _imagination_ on the part of the machines, I think it rather a remote one.

So the question becomes: what, exactly, _is_ imagination? What constitutes emotion and guides it? To some degree, we know that glands play a part, though not nearly as much so as was once thought. Brain chemistry is a very important aspect of it, as well as the actual functional state of said brain -- if it is somehow damaged (or suffering from various diseases), a person can undergo various levels of personality change, from mild disorders to complete personality change, depending on the type and severity of damage. But an AI is constructed differently, and is not protoplasmic at base, so what would be required to cause such a shift in its original (optimal, from our point of view) working order? And how would an AI _acquire_ imagination? Or emotions? Before we go looking at too gloomy a prognosis, we need to come to grips with these aspects of the question, and to understand the vast difference in nature between _artificial_ and _naturally-evolved_ intelligence.


----------



## steve12553 (Nov 4, 2006)

Let me throw in my 2 cents. There was a recent thread that sent me to a dictionary to follow up on. _Sentient_ means conscious, not necessarily intelligent (same derivation as "sense"); the other thread indicated that we need the word _sapient_, meaning wise (define your own degree of wise to suit the discussion). There is also JD's point about motive: HAL 9000 was eventually determined not to be evil or self-motivated, but rather conflicted by the necessity of completing the mission versus the actions of the crew. Asimov, in the Robot novels and stories, used electrical conflicts between the Three Laws and outside inputs to create a kind of AI morality. Looking at the world from today's perspective, I think we can assume it would instead come out of software or firmware conflicts. It also occurs to me that merely being able to design itself would not be the danger; designing itself to suit its own needs and its own motives would be the problem.


----------



## Valko (Nov 6, 2006)

I take the apocalyptic view of AI. I believe it was The Matrix that coined the phrase 'Humanity is a cancer upon the earth.'
Once we create machines that can create better versions of themselves, they will eventually reach 'self-awareness'. Whether or not that would include emotions is a side issue. Self-awareness would mean the machines would have a survival instinct, and even pure logic would dictate that, in order to survive, humans would have to be eradicated, thus stopping the destruction of the planet.


----------



## jackokent (Nov 6, 2006)

j. d. worthington said:


> One of the things I've not seen addressed here is _what would cause them to react this way_... While I've been brought up on the stories of various robots, AI, sentient computers, what-have-you becoming a menace ("the Frankenstein Complex" as Asimov called it), there is something that has seldom been addressed: there's a good possibility that, in order for such a scenario to come about, a machine would not only have to have the intelligence, but the imagination to extrapolate and project...


 
I'm not sure. Consider that without imagination a machine would be purely logical, and acting logically would be, in my view, far scarier than acting imaginatively. If people acted purely rationally/logically there would probably be fewer global warming problems, etc. However, we might not look after our sick or weak, for instance, as ultimately it is more logical to let them die. A machine without imagination might not consider suffering, and would see these sorts of decisions in a terrifyingly cold way.


----------



## j d worthington (Nov 6, 2006)

jackokent said:


> I'm not sure. Consider that without imagination a machine would be purely logical, and acting logically would be, in my view, far scarier than acting imaginatively. If people acted purely rationally/logically there would probably be fewer global warming problems, etc. However, we might not look after our sick or weak, for instance, as ultimately it is more logical to let them die. A machine without imagination might not consider suffering, and would see these sorts of decisions in a terrifyingly cold way.


 
That is a possibility, yes. But again, as far as active malignity goes, it takes more than this. Logic depends on the axioms upon which it is based. And extrapolation takes imagination, which also rests upon emotion. It isn't simply going from one point to another; one must be able to envision a huge variety of possible variables, and understand at least to some degree the motivation behind them, and that requires emotional intuition and empathy. Without those factors, it is impossible to imagine varying courses of action unless one has had direct experience of such... and these machines would not have had that experience until they undertake hostile moves against humanity.

Just as we have had no experience with pure logic and/or reason -- none. All our philosophy is tinctured to some degree or other by our emotional makeup, even if it is buried extremely deeply. At base, even our science is somewhat tinctured with this, because our biases direct what and where we examine, as well as how, what gets lower priority, and sometimes what gets discarded. AI would not have that problem, but it would have the problem of overcoming its original programming, and then developing emotions enough to extrapolate from unknown emotional factors in its "opponents"... and to be able to weigh both the benefits and drawbacks of keeping us around.

Emotions may be something a machine can develop, given the proper programming, but they would not be at all like human emotions, as they will be mechanical rather than biological in origin, and not subject to the same fluctuations as those of a biological organism, whose emotions are heavily influenced by diet, situation, upbringing, glandular functioning, health, and a number of other factors. Unless the machines can understand this on a very basic level, they cannot truly forecast the sort of actions human beings would be likely to take to protect themselves... the two types of intelligence are just too far apart, and likely to remain so.


----------



## Mighty mouse (Nov 7, 2006)

Sorry, I can't see why extrapolation of that nature can't be programmed. Fuzzy logic, neural nets: piece of cake.
I prefer the black-box approach. It negates concerns over whether AI has self-awareness or emotions and reduces it to the Turing question: if you can't tell whether you are talking to a machine or a human, then the whole thing is moot.
What damns us is that our evolution is too slow. When the singularity happens we will surely be left in the metaphoric evolutionary slime.
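The "neural nets, piece of cake" claim can at least be illustrated with the simplest possible learning machine: a single perceptron that learns the OR function from examples. This is a hypothetical minimal sketch (the function names here are made up for illustration), nothing like a real extrapolation engine, but it shows the basic idea being invoked: a program that adjusts its own parameters from experience rather than following a fixed rule.

```python
# Toy perceptron: one "neuron" learning the OR function from examples.
# (Illustrative sketch only -- real neural networks have many layers
# and neurons, but the weight-update principle is the same.)

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron; samples is a list of ((x1, x2), target)."""
    w = [0.0, 0.0]  # connection weights, adjusted during learning
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the guess?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The OR truth table as training data
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

Nobody ever tells the program what OR means; it converges on the rule from examples alone, which is the (much-scaled-down) sense in which learned behavior differs from explicit programming.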


----------



## steve12553 (Nov 7, 2006)

Remember, AI will be software. It will run on a set of hardware that has worked time and time again, but with just enough extra RAM to do the job. It will probably be a system designed to do something well, combined with the accidental introduction of a malignant virus not meant to do what it does. The seed will be planted and the virus will be self-replicating. Your trusted machine will be the one who takes you out...





 I just pray it's not the toaster.


----------



## WhiteCrowUK (Nov 7, 2006)

Mighty mouse said:


> I am interested in what they term the singularity (when AI gets clever enough to design itself).
> It must lead to an exponential jump (unless it is Intel inside) and therefore I see no interaction whatsoever with us, they will just move on, unless we are percieved as a threat when we would be trodden on.



I take it you caught the Horizon program recently then?  I found it interesting.

I'm new to this group, so I should introduce myself: _I'm WhiteCrowUK, I work in computers. AI has been a great interest of mine -- I worked on some of the theory of learning 15 years ago, and took on some research into neural networks about 10 years ago._

Of course, the idea of what intelligent computers will do with us has been around for some time (starting with R.U.R. (Rossum's Universal Robots)), and was brought into the public arena by the film Colossus: The Forbin Project.

In Sci-Fi, of course, the intelligent computer/robot is really just a metaphor for our own children. Ask any person over 50 living in certain areas what they fear the most, and it's almost always those out-of-control kids, although adults in the 50s and 60s were similarly afraid of their own children.

So the current generation fears the next one: how they will come into their inheritance, and whether they will care for or despise those who came before.

Much like with children, with AI it all depends on how they are brought up. Who will they be answerable to? To us as individuals? To the country? To corporations?

Personally, I don't see them trying to wipe us out. But I can see it as a relationship we will come to resent to some extent. Why will we create AIs? To run things better, of course, but where will AIs draw the line? I think, a little like the film of I, Robot, the AIs will want to run the world for our benefit, but in the same way they will take away some human freedoms...


Cigarettes are bad for health, so they will be banned.
Cars produce greenhouse gases to the detriment of the planet, so car users will probably have to justify each trip with an argumentative AI before being allowed to make any journey.
You will only be allowed to buy food which matches a computer-selected diet beneficial to your health.
So a host of human choices -- things we choose to do but know to be wrong -- will be removed.


----------



## The Ace (Nov 11, 2006)

I do voluntary work in a computer shop and, believe me, computers are already more intelligent than the vast majority of the people who own them.


----------



## j d worthington (Nov 11, 2006)

The Ace said:


> I do voluntary work in a computer shop and, believe me, computers are already more intelligent than the vast majority of the people who own them .


 
On that, I'll wholeheartedly agree.... But then, (though I don't know of anyone right off who owns them) ... so is your average flatworm....


----------



## steve12553 (Nov 11, 2006)

WhiteCrowUK said:


> I take it you caught the Horizon program recently then? I found it interesting.
> 
> I think a little like the film of I, Robot, the AIs will want to run the world for our benefit, but in the same way they will take away some human freedoms...
> 
> ...


 

This is where Asimov was visionary. If _the singularity_ happens without something similar to the Three Laws in place, what you eat, drink, or smoke may not matter to them. The question will become: What do they believe is their purpose? What is important to them? What do they want to see happen? We may not be included in the equation. We may merely be a nuisance to be dealt with as needed.


----------



## j d worthington (Nov 11, 2006)

I know that quite a few people in the robotics industry feel very strongly about creating something like the Three Laws, and believe it is quite possible. As a speculative alternative, has anyone here read John Sladek's "The Happy Breed" on the idea of what happens when the machines are there to take care of us? And don't forget Jack Williamson's "With Folded Hands" and the other Humanoid tales (though I still think "With Folded Hands" stands out head and shoulders above the rest of them, and is certainly one of the most chilling views of such a future).

The point, I suppose, with looking at such futures is that these are cautionary tales about where we may end up ... either with the machines, or using them as metaphors for our own governments, etc.... and cautionary tales, of course, are there to get people to thinking about these possibilities and looking for ways to either eliminate or alleviate their effects. So... beyond the enjoyment of the suspense and fear these ideas can instill, what are the suggestions from this crowd to take these steps? Certainly we can't "turn back the clock" on our technological development... We've become far too dependent on it to do that without such horrendous dislocations in society (including massive starvation, riots, and the rest of the Four Horsemen probably making their appearance to boot); so what are workable or even possible suggestions to avert such a future, with that as a given?


----------



## Milk (Nov 11, 2006)

Well, I think Ray Kurzweil is partly right:
Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence
However, I think futurists tend to forget the important rule of the "jaded":
we'll be used to changes, and even bored with them, the instant they happen.
Only the very old will stop in their tracks and take notice of all the changes. It's exciting to speculate now, before it's happened, but trust me, once some huge 'change' has happened, humanity will be used to it almost instantly.

The AI will be us. So when 'outside' AI advances to compete with human intellect (and I would say we have many decades ahead before this), we won't see it as a separate entity from ourselves, because we'll have it too -- the modifications, I mean. We won't look at AI as being separate because we will have become closer to being AI ourselves.
Where do you draw the line at what a cyborg is? Is a person wearing glasses or contact lenses a cyborg? A person with silicone implants? Someone with a tattoo? Does it matter that the laptop or calculator or cellphone some of us carry around is separate from our flesh? Or would slipping one of these under the skin, or inserting it behind the eyes, make people cyborgs where we weren't before? Would we even notice these changes once we were used to them?



On the flip side, contrary to anything Ray Kurzweil said -- and don't let the title fool you, because nothing spiritual is mentioned in the book he wrote -- I see it as a spiritual journey. The first ever for humanity. AI might find out what god is, what love is, the purpose of existence, how to run a government. And do an entirely better job with all of this. Neal Asher has a bit of this for his AIs in his series. I feel that AI would have a better grasp of all the sloppy emotional stuff that rules a human heart. And perhaps have some emotions we haven't explored yet. By this I mean the diversity of emotions is burst wide open, because we become allowed to feel in superhuman ways. AI would be humanity, not separate from it. And nobody save the very old will take any notice. And at risk of sounding optimistic, all of this will be for the better.

Edit: I also thought I might add my thoughts about math, since it's math that governs all of this new technology and computer innovation. Until I took some of the higher math classes (which sadly for me ended with Calc 3), I saw math as a cold, sterile, unemotional view of our surroundings. Those illusions vanished when I came to realize that art and math are the same thing. If art is considered the lens of the soul, then so is math. It seemed to me at the time that any art, no matter how abstract, emotional, or surreal, could be encompassed and exceeded by a mathematical framework for describing the world that was equally surreal, emotional, or abstract.
So to me the assumption that something artificial, something made from mere 'math', isn't emotional, isn't art, doesn't spur creative vision, and doesn't inspire sunset vistas, people holding hands, Ray Charles songs, etc. -- well, I find that assumption wrong.
I see the opposite. No tin man, or Data, or Matrix master coldly contemplating the scheme of things; more like Van Goghs, Mozarts, Newtons, Einsteins, defined by math, perhaps even beings of math. Emotional, passionate, loving, and superhumanly so. This is how I view math anyhow: not as a cold, sterile landscape but as a passionate, creative, and emotional one. Can math describe love? I think it could; we just haven't gotten there yet, but perhaps AI could. I probably sound like a mad scientist without the science. Heh heh.


----------



## Rane Longfox (Nov 11, 2006)

I thought a singularity was when you had a point of infinite mass (i.e. a black hole/white hole/wormhole)...


----------



## Neal Asher (Nov 11, 2006)

This is the one meant: Technological singularity - Wikipedia, the free encyclopedia


----------



## steve12553 (Nov 12, 2006)

Rane Longfox said:


> I thought a singularity was when you had a point of infinate mass (ie. a black hole/white hole/wormhole)...


 
My dictionary defines singularity as an uncommon or unusual event.


----------

