So I'm writing a short story, and I've come to a crossroads with regard to the technology, so I'm reaching out for opinions.
Based on the neural network technologies currently being studied at DeepMind, do you believe that the artificial intelligence systems of the future will develop their own emotions?
This is a total free-for-all question; I'm interested to hear arguments both for and against. From my research, while it's considered critical to create AIs that can read and react to human emotion, there seems to be an ethical argument against AIs developing emotions of their own.
Thanks in advance for everyone's input.
Yeah, this is interesting. To me, understanding how emotions are incited and why emotions exist as a biological mechanism may shed some light on this question.
1. How emotions are incited (what emotions are): as others have mentioned in previous posts, they're chemical reactions.
2. Why emotions exist (what role they play): for me, emotions are the link between our goals and our actions. For example, you have the idea of winning a race, and you're able to run. But you might not put effort into practicing if you feel nothing about winning the race (achieving the goal). If you don't feel a little excited whenever you imagine yourself holding the trophy, why would you bother practicing (taking action)?
So for AI, I think the question boils down to one thing: what mechanism do we build for an AI to initiate action toward its goal?
If it's a simple computer, we type in the code for the action and the goal is achieved directly. But if we set up a goal for an AI and give it the ability to act, we can instead write a 'general' value function that the AI learns by itself. That value function becomes the link between the goal and the action, so in a sense the value function 'is' its emotion.
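To make that concrete, here's a minimal toy sketch (all names and numbers are mine, purely illustrative): an agent with two actions, "rest" and "practice," learns a value estimate for each from reward (winning the race). The learned value is what nudges it toward the action that serves the goal, playing the role the reply assigns to emotion.

```python
import random

# Toy value-function sketch (hypothetical setup, not any real DeepMind system).
# Actions: 0 = rest, 1 = practice. Practicing makes winning the race likely.

def run_training(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]  # learned value estimate for each action
    for _ in range(episodes):
        # Epsilon-greedy: mostly take the higher-valued action,
        # occasionally explore a random one.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        # Reward comes from the goal: winning the race.
        win_prob = 0.8 if a == 1 else 0.1
        reward = 1.0 if rng.random() < win_prob else 0.0
        # Incremental update: nudge the value toward the observed reward.
        q[a] += alpha * (reward - q[a])
    return q

q = run_training()
# After training, the value of 'practice' exceeds the value of 'rest' --
# that learned preference is what drives the agent's action choice.
```

The point of the sketch is the loop's last line: the value function is updated by experience, and once `q[1]` rises above `q[0]`, the agent "wants" to practice, which is exactly the goal-to-action link described above.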
But do we actually need a machine that has emotions?
Chieh