Would an AI have its own emotions?

So I'm writing a short story and I've come to a crossroads regarding the technology, so I'm reaching out for opinions.

Based on the DeepMind neural network technologies currently being studied, do you believe that the Artificial Intelligence systems of the future will develop their own emotions?

This is a total free-for-all question. I'm interested to hear the arguments for and against. From my research, while it's critical to create AIs that can read and react to human emotion, there seems to be an ethical argument against AIs developing emotions of their own.

Thanks in advance for everyone's input.


Ya, this is interesting. To me, understanding how emotions are incited and why emotions exist as a biological mechanism may bring some light to this question.
1. How emotions are incited (what emotions are): as other people have mentioned in previous posts, it's chemical reactions.
2. Why emotions exist (what role they play): for me, emotions are the link between our goal and our action. For example, you have the idea of winning a race, and you can run. But you might not put effort into practising if you feel nothing about winning the race (achieving the goal). If you don't feel a little excited whenever you imagine yourself holding the trophy, why would you bother practising for the race (taking action)?

So, for AI, I think the question boils down to just one thing: what is the mechanism we build for an AI to initiate action to achieve its goal?
If it is a low-end computer, we just type in the code for the action and the goal is achieved. But if we set a goal for an AI and give it the ability to act, we can write a 'general' value function that the AI learns by itself. That value function is the link between the goal and the action, so in a sense the value function 'is' its emotion.
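To make that concrete, here is a minimal sketch in Python of the kind of thing I mean: a toy tabular value function learned from a reward signal, where the learned values are the only thing pulling the agent toward the goal. Everything specific here (the one-dimensional world, GOAL_STATE, the TD(0) update) is just my own illustrative assumption, not any particular DeepMind system.

```python
import random

# Toy 1-D world: states 0..5, the "goal" is state 5.
# V maps each state to how "good" the agent has learned it feels to be there.
GOAL_STATE = 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

V = {s: 0.0 for s in range(GOAL_STATE + 1)}

def step(state, action):
    """Move left (-1) or right (+1); reward only arrives on reaching the goal."""
    next_state = max(0, min(GOAL_STATE, state + action))
    reward = 1.0 if next_state == GOAL_STATE else 0.0
    return next_state, reward

def choose_action(state):
    """Mostly move toward whichever neighbour the agent 'feels best' about."""
    if random.random() < EPSILON:
        return random.choice([-1, 1])
    left, _ = step(state, -1)
    right, _ = step(state, 1)
    return -1 if V[left] > V[right] else 1

for episode in range(200):
    state = 0
    while state != GOAL_STATE:
        action = choose_action(state)
        next_state, reward = step(state, action)
        # TD(0) update: states on the way to the goal accumulate value,
        # which is exactly the "link between goal and action" above.
        V[state] += ALPHA * (reward + GAMMA * V[next_state] - V[state])
        state = next_state

print({s: round(v, 2) for s, v in V.items()})
```

After training, the values rise steadily toward the goal state, so "caring about winning" is literally nothing more than those learned numbers biasing which action gets picked.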

But do we need a machine that has emotions?

Chieh
 
I don't think it makes any sense to talk about any sort of consciousness that is emotionless. If an intelligence is self aware, then it makes value judgments about itself and the entities that inhabit its universe. Emotions aren't just an added function we have - they are the underlying language of our thoughts.
 
I was talking to a friend about the film Ex Machina recently. He made what I thought was a good comment: "If you want a robot to be nice, don't get a maniac to design it". I could imagine that an AI designed to interact with people (especially a humanoid robot) could have a fake but consistent personality together with relatable but unimportant traits (favourite colour, for instance). However, somewhere along the line, you would have to deal with whether there should be anything more to this AI's mind than just "accomplish present task and continue to exist until told otherwise" and, if so, what that means.
 
I just saw that movie a few nights ago. Well worth the watch.

If an intelligence is self aware, then it makes value judgments about itself and the entities that inhabit its universe.

I agree with your full comment, but what if the intelligence is not self aware, merely Turing complete? Does that change anything?
 
We might end up with something a bit like a super-intelligent and very obedient animal: possessing vast brain power but lacking awareness in the manner of a human being. One thing that struck me about Ex Machina was that the AI actually wanted to continue existing. Presumably that urge had been programmed into it: otherwise, it would not care about being destroyed.

The thing about emotions - at least as I would define the word - is that they are often irrational, either in their basis or the strength by which they're held, and it's hard to see how an artificial mind that properly assessed full evidence could come to irrational conclusions. I can't imagine an AI experiencing grief over a death, for instance, unless specifically programmed to do so or copying human behaviour: it would surely acknowledge that someone had died, say, make some expression of regret, and then alter its outlook to the new parameters.
 
I can't imagine an AI experiencing grief over a death, for instance

Perhaps, although the process of re-calibrating all of its internal models to account for an unexpected change in its immediate world/resources may result in an expression of “stress” that could be called the AI’s form of grief. Are emotions necessarily more than variations in behavior as a result of context?
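One very rough way to picture that (purely a toy framing of my own, not a real architecture) is an agent that keeps a running prediction about its world and reports "stress" whenever the prediction error spikes and forces it to re-fit its model:

```python
# Toy sketch: "stress" as a spike in prediction error that forces the agent
# to re-calibrate its internal model. Purely illustrative, not a real AI design.

class WorldModel:
    def __init__(self, learning_rate=0.2):
        self.expected = 0.0              # the agent's current belief about the world
        self.learning_rate = learning_rate

    def observe(self, actual):
        error = abs(actual - self.expected)   # how wrong the model turned out to be
        # Re-calibrate: pull the internal model toward what actually happened.
        self.expected += self.learning_rate * (actual - self.expected)
        return error                          # treat a large, sudden error as "stress"

model = WorldModel()
# A stable world, then one sudden, permanent change it did not expect.
observations = [1.0] * 20 + [6.0] * 10
for t, obs in enumerate(observations):
    stress = model.observe(obs)
    if stress > 2.0:
        print(f"t={t}: large prediction error ({stress:.1f}) -> 'grief-like' re-calibration")
```

The burst of large errors while the model adjusts, followed by the gradual quieting down as it settles on the new parameters, is about as close to "grief" as this toy gets.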
 
I agree with your full comment, but what if the intelligence is not self aware, merely Turing complete? Does that change anything?
Can you imagine a person who is not self aware and still functional?

Many of the approaches to "real" AI subscribe to a learned environment scheme, where intelligence is an emergent adaptation rather than the result of assembling all the right programmed parts. What we'll get will be the result of the limitations we put on that process rather than the ingredients we add.


The thing about emotions - at least as I would define the word - is that they are often irrational, either in their basis or the strength by which they're held, and it's hard to see how an artificial mind that properly assessed full evidence could come to irrational conclusions.

I don't think emotions are irrational. But sometimes the beliefs that lead people to emotions cause them to process reality irrationally, leading to inappropriate emotions. We wouldn't have emotions if they didn't serve a survival purpose.
 
How aware is aware? How about a person with early-stage dementia? Late-stage? (It'd be interesting to figure out a way to give an AI its own version of dementia.)

Conversely, it'd be interesting to chronicle an emerging consciousness. Yes, that was addressed as far back as 2001, but only superficially. Flowers for Algernon sort of did this with a human.

So many AI stories have the AI being super intelligent and strictly logical. I'd like to see stories where the AI was an idiot savant, or was only about two cuts above stupid, or maybe was just intelligent enough to be a good companion.

Similarly with emotions. How about a hippy AI? Peace and love, baby. Or an AI who behaves like a 19th century Romantic. Or one who is timid, with phobias.

Do you send your robot out for therapy?
 
I'd like to see stories where the AI was an idiot savant, or was only about two cuts above stupid, or maybe was just intelligent enough to be a good companion.
How is that any different than Alexa?
 
Well, that loops back to my original point: that all these debates about what "really" counts as artificial intelligence are moot. At some point we will behave toward an AI as if it were intelligent, and that will be that. Intelligence is whatever we say it is; more accurately, it's however we treat it. We will anthropomorphize and adjust our actions, and somewhere someone will object that that's not true intelligence, and it won't matter a whit. Roll forward a couple of generations, and the whole discussion will be forgotten.

I thought up another definition, though. It will truly be AI when the AI itself starts to demand rights. It would be cool if AIs followed the same trajectory as previous civil rights movements (including, I suppose, dying for the cause).
 
Well, that loops back to my original point: that all these debates about what "really" counts as artificial intelligence are moot. At some point we will behave toward an AI as if it were intelligent, and that will be that. Intelligence is whatever we say it is; more accurately, it's however we treat it. We will anthropomorphize and adjust our actions, and somewhere someone will object that that's not true intelligence, and it won't matter a whit. Roll forward a couple of generations, and the whole discussion will be forgotten.

I thought up another definition, though. It will truly be AI when the AI itself starts to demand rights. It would be cool if AIs followed the same trajectory as previous civil rights movements (including, I suppose, dying for the cause).
I think those who insist on absolute definitions for a technology are the rational ones. You can feel that your pet rock is a good listener, but we have tools like the Turing test that are designed to draw the line between wishful anthropomorphizing and genuine 'intelligence'.

Ultimately, AI is going to be marked by having identifiable motivations in its actions, motivations that were not programmed. I don't know about 'rights' - that may be a very human/organic life concern.
 
I think "rights" are a very human concern. A computer would probably have no concept of, say, being treated with dignity (which is one of the human rights recognised by European law) until humans told it so.

In Asimov's story "The Bicentennial Man", a robot was ordered to undress by a group of louts, as a joke. Obviously this is undignified and, if the robot is humanoid, would make people uncomfortable. So, we could programme the AI to refuse without very good reason. So what would happen if a man says to an AI "Unless you undress, I will kill myself"? In order to have the full capacity for dignity, I think, the AI has to be able to say "No, that's ridiculous". If the man is crazy enough to then commit suicide, the AI can say "My decision was reasonable, and I shouldn't have to suffer the consequences".

So it seems to me that, to have the full spectrum of human rights, an AI must be able to refuse to do things - which isn't very useful in a computer. I wonder if it also means that we would end up judging the actions of an AI not as right or wrong answers, but by the more human standard of "reasonable in the circumstances".
 
So it seems to me that, to have the full spectrum of human rights, an AI must be able to refuse to do things - which isn't very useful in a computer. I wonder if it also means that we would end up judging the actions of an AI not as right or wrong answers, but by the more human standard of "reasonable in the circumstances".
Why would "reasonableness" be a judgement of intelligence? That seems like a measure of something that has use only to others - a tool.
 
I think you should ask why humans have emotions. If they weren't a good survival trait, evolution would have passed them by, but we have some pretty strong emotions, which suggests to me that they have a significant impact in the arena of survival of the fittest. Maybe anger is good for defence, love good for bonding and nurturing, etc. Then ask how emotions would assist in the success (survival) of an AI. Would they be at all beneficial in such an environment? That raises the question of why they would be programmed in at all. As far as I can see, there is no reason to suppose human emotions appeared spontaneously as opposed to being selected for over hundreds of thousands of years by the normal processes of evolution. If so, why should they appear spontaneously in AIs just because of their level of complexity? If, on the other hand, AIs with emotions programmed into them seemed to be more successful, then people would be more likely to continue programming them in, and AI evolution will have taken its course.

One way that programming emotions into AIs might improve their efficacy is that it might make the AIs easier for humans to interact with, which is likely to be a desirable trait for most AIs (though possibly not war drones!). Now whether you call those "real emotions" or "simulated emotions" is, I feel, irrelevant; if they consistently produce the same result from the same stimuli, then surely that is only what our human emotions do. I struggle to see the difference.

So I would argue that AIs will only have emotion if it is programmed into them (by either us or them!) and that will only happen if there is some benefit gained by doing so.
 
Those are very good points.

The one wrench I’ll throw in there is that “emotion” would fall strongly into the category of problems we don’t know how to program directly. This is why we have machine learning and artificial intelligence... it is a method of letting a computer teach itself a task we have no idea how to program by hand. Thus, there are limits on what a developer can add or remove from a candidate AI in the sense you mean.

It may boil down to a question of what emotion, if any, is necessarily emergent and intrinsic to an intelligent system.

Same for consciousness, btw. That wouldn’t be an add-on we’d know how to make. It is either intrinsic to sufficiently advanced self-reflective intelligences, or it is not.
 
There is an assumption that emotions are somehow separate from the rest of thinking and are these added things layered on top. I think it would be more accurate to say that we "fear" what our instincts and experience have informed us can harm, "enjoy" things that provide benefits, etc. Hard to separate the fact from the feeling.

An AI is likely to have to have a similar valuation of its experience, one that clumps in ways that are similar to what we call emotions. If an AI tends to avoid interacting with some people, is it not fair to say that the AI effectively "dislikes" those people? Is our own "dislike" a freestanding, irrational "feeling", or is it shorthand for our own understanding of what we avoid vs. seek out?
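To put that "dislike" in concrete terms, here is a tiny sketch (the names and the simple running-average rule are just my own illustration, not a claim about any real system): a per-person score built up from past interaction outcomes is all that drives the avoid-or-approach choice.

```python
from collections import defaultdict

class SocialAgent:
    """Toy agent whose only 'feeling' about a person is a learned running score."""

    def __init__(self):
        self.affinity = defaultdict(float)   # person -> accumulated valuation

    def record_interaction(self, person, outcome):
        # outcome > 0 for interactions that went well, < 0 for ones that didn't.
        self.affinity[person] = 0.8 * self.affinity[person] + 0.2 * outcome

    def will_approach(self, person):
        # Behaviourally, a negative accumulated score looks like "dislike".
        return self.affinity[person] >= 0

agent = SocialAgent()
for outcome in (-1.0, -0.5, -1.0):
    agent.record_interaction("Alice", outcome)
agent.record_interaction("Bob", 1.0)

print(agent.will_approach("Alice"))   # False -> the agent avoids, i.e. "dislikes", Alice
print(agent.will_approach("Bob"))     # True
```

There is no separate "feeling" module anywhere in there; the dislike just is the accumulated valuation driving what the agent avoids vs. seeks out.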

Living in a different way from biological creatures, some AI emotions might be unfamiliar or even incomprehensible to organic life - but still an active part of their cognition.
 
I think you should ask why humans have emotions. If they weren't a good survival trait, evolution would have passed them by, but we have some pretty strong emotions, which suggests to me that they have a significant impact in the arena of survival of the fittest. Maybe anger is good for defence, love good for bonding and nurturing, etc. Then ask how emotions would assist in the success (survival) of an AI. Would they be at all beneficial in such an environment? That raises the question of why they would be programmed in at all. As far as I can see, there is no reason to suppose human emotions appeared spontaneously as opposed to being selected for over hundreds of thousands of years by the normal processes of evolution. If so, why should they appear spontaneously in AIs just because of their level of complexity? If, on the other hand, AIs with emotions programmed into them seemed to be more successful, then people would be more likely to continue programming them in, and AI evolution will have taken its course.

Interesting points, but I would add that the vast majority of people think that a "survival trait" means only physical survival.
Emotions are essential for mental survival, i.e. sanity.
PS imo, love is not an emotion - it's a source of emotions.
 
Living in a different way from biological creatures, some AI emotions might be unfamiliar or even incomprehensible to organic life - but still an active part of their cognition.

This is true, and leads to a particularly interesting thought experiment - could a human thinker devise a new emotion?
 
