Would an AI have its own emotions?

We need to start by defining what an emotion in humans is. Let's start with:

A chemical reaction that makes us feel good or bad and so makes us act in such a way as to avoid or repeat the activity that caused the reaction in the first place.
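
For what it's worth, that definition maps loosely onto the "reward signal" used in machine learning. Here's a toy sketch of the idea (the activities, numbers and names are all invented for illustration - this is an analogy, not a claim that the code has an emotion):

```python
import random

# Toy sketch of the definition above: an agent gets a "good/bad" signal
# after each activity and learns to repeat or avoid it.
values = {"touch_fire": 0.0, "eat_food": 0.0}  # learned value of each activity

def choose(values, epsilon=0.1):
    # Mostly pick the activity that has felt best so far,
    # occasionally explore at random.
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

def update(values, action, reward, lr=0.5):
    # Nudge the stored value toward the reward signal received.
    values[action] += lr * (reward - values[action])

for _ in range(20):
    action = choose(values)
    reward = -1.0 if action == "touch_fire" else 1.0  # stand-in for "feel good or bad"
    update(values, action, reward)

print(values)  # "eat_food" ends up preferred, "touch_fire" avoided
```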

Why can people laugh or cry from reading a passage in a book? How does reading some symbols trigger these supposed chemical reactions?
 

Reading these symbols is, I suppose, simulating in some way experiencing them in real life and triggering a similar response. The act of reading a passage in a story that, say, invokes fear is much the same as 'experiencing' the passage and so triggering the fear. The better the story is written, the more realistic it will feel and the stronger the emotion.
 
simulating in some way experiencing them in real life and triggering a similar response.

But an AI would not have those experiences. The only reason an AI would have a human-like body would be because humans deliberately made one. How would that apply to an AI like HAL 9000?

Daisy, Daisy ...
 

On my mobile at the moment so won’t go into too much detail but, personally, can’t see an AI having emotions. Giving the appearance of, maybe, but actually experiencing them? No.
 
A bit more detail.

A computer processor has the ability to perform a basic set of instructions - add, subtract, multiply, divide, copy, compare, branch and I/O to communicate with attached devices. So if an AI is running on computer hardware as we know it today, I can't see anything in that instruction set that would allow it to experience an emotion.
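
To make that concrete, here's a minimal sketch - my own invention, not any real processor's instruction set or encoding - of the fetch-decode-execute cycle. Whatever the program appears to be doing, the machine is only ever performing one of these primitive operations at a time:

```python
# Minimal sketch of a fetch-decode-execute loop. The instruction set and
# encoding are invented for illustration; real ISAs differ in detail.
def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, a, b, dest = program[pc]
        pc += 1
        if op == "ADD":
            memory[dest] = memory[a] + memory[b]
        elif op == "SUB":
            memory[dest] = memory[a] - memory[b]
        elif op == "COPY":
            memory[dest] = memory[a]
        elif op == "BRANCH_IF_ZERO":
            if memory[a] == 0:
                pc = dest  # jump to another instruction
        elif op == "OUT":
            print(memory[a])  # stand-in for I/O to an attached device

# memory holds [2, 3, 0]; compute 2 + 3 and print it
run([("ADD", 0, 1, 2), ("OUT", 2, None, None)], [2, 3, 0])
```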

Of course, running at high speed, a computer can give the impression that it's doing all sorts of things - playing chess or Go. But it's all an illusion. At any one time a computer is performing one of the instructions in its repertoire. There's nothing there that thinks or experiences. Even 'learning' systems, at the end of the day, are just computers following their basic instruction set, albeit backed up by a complex database of 'learnt' information.

Advances in computer technology, such as quantum processors, may change things in the future.
 

So if a von Neumann device can be stupid fast enough, it can appear to be intelligent. LOL

 
At any one time a computer is performing one of the instructions in its repertoire. There's nothing there that thinks or experiences. Even 'learning' systems, at the end of the day, are just computers following their basic instruction set, albeit backed up by a complex database of 'learnt' information.

At the fundamental level of the brain, I’m hard pressed to explain how it’s any different. The chemical chain reactions and synaptic firings are forces of nature, with outputs mechanically linked to inputs, which is much the same as “following basic instructions.” Likewise, the particular interaction of those billions of physical events occurs in the context of learned/trained/hardwired information.

Seems to me it is a simple question of complexity. As to whether such a system experiences consciousness, who knows? I suppose that depends on what you believe about the soul. But strictly in terms of thinking and understanding and even having awareness, these properties are certainly linked to the mechanics of the brain, as is demonstrated when damage to particular areas of the brain disables or interferes with a variety of very particular capabilities. It seems to follow that similar mechanics expressed in silicon, rather than biological proteins, should be able to accomplish *all* the same things.
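
For what it's worth, even a textbook point-neuron model - a gross simplification of real biology, sketched here from memory with made-up parameters - is just decay, accumulate, and compare: exactly the kind of "basic instructions" under discussion:

```python
# A textbook-style "leaky integrate-and-fire" point neuron, heavily
# simplified; the parameters are illustrative, not biologically calibrated.
def neuron_step(potential, inputs, weights, leak=0.9, threshold=1.0):
    # Mechanically map inputs to output: decay, accumulate, compare.
    potential = leak * potential + sum(w * x for w, x in zip(weights, inputs))
    if potential >= threshold:
        return 0.0, 1.0    # threshold reached: reset and emit a spike
    return potential, 0.0  # otherwise keep accumulating, no spike

potential = 0.0
for t in range(5):
    potential, spike = neuron_step(potential, inputs=[0.3, 0.2], weights=[1.0, 1.0])
    print(t, round(potential, 3), spike)  # fires once the accumulated input crosses threshold
```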
 
So if a von Neumann device can be stupid fast enough, it can appear to be intelligent. LOL


And why not?

I worked with an experimental processor in the 70s that had a reduced instruction set (not to be confused with RISC architecture) of just 4 commands - subtract, compare to zero, branch and I/O. The compiler called sub-routines to simulate all the other commands using just the basic 4. This machine played chess quite competently at an exhibition, giving the impression of intelligence to the general public. But of course it wasn't intelligent - it was an illusion. It was fast because of the simplicity of its processor. It never got off the ground because dedicated numerical co-processors were even faster, and the simulation sub-routines took up memory that was expensive at the time.
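
The trick of synthesising the missing commands from subtraction still makes a nice exercise. A rough sketch of the idea - my reconstruction, not that machine's actual sub-routines:

```python
# Building the missing commands from subtraction alone.
# Since a + b == a - (0 - b), two subtracts make an add:
def add_via_sub(a, b):
    neg_b = 0 - b      # first subtract: negate b
    return a - neg_b   # second subtract: a - (-b) = a + b

# Multiply by repeated addition, using only subtract plus the
# compare-to-zero branch (assumes b is a non-negative integer):
def mul_via_sub(a, b):
    total, count = 0, b
    while count != 0:                # "compare to zero, branch"
        total = add_via_sub(total, a)
        count = count - 1            # subtract again
    return total

print(add_via_sub(2, 3))  # 5
print(mul_via_sub(4, 6))  # 24
```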

I knew a couple of programmers who worked for Clive Sinclair at the time he said in a TV interview that he thought computers 'would become self-aware'. Their response: "They're just fast adding machines, Clive, just fast adding machines."
 
At the fundamental level of the brain, I’m hard pressed to explain how it’s any different.

So am I. And that's because I'm inclined to think of the brain in just the same way as you. But, and I have to remind myself of this, I'm wrong. It's an easy trap to fall into. Just because a computer can do some of the things that the brain does doesn't mean to say that it can do all of them.

Processors are built to do certain basic things. What makes us think that they can do other things that aren't in their design?

The reality is that the human brain is the most complex thing that we know of in the universe and to think that we can understand it or explain it, at the moment, is futile. Computers, by comparison, are child's play. There was an article about human consciousness in New Scientist in the last few months that explained just how little we know. If I can find it I'll post a link. If we don't know how a brain does things how can we say a computer is doing them? And BTW no, I don't think there's such a thing as the soul.

One thing of which I'm sure - to compare current computer architecture with the human brain is a mistake.
 
If we don't know how a brain does things how can we say a computer is doing them?

This is a good point, but since we are speaking in terms of theoretical boundaries, all we really need to decide is whether there is more going on in the brain than information processing. If the answer is no, then while we may not understand exactly what those biological circuits are doing for decades or centuries to come, we can conclude that, in principle, a computer can do the same thing someday. Information processing is just a matter of taking inputs, transforming them in some way, and producing outputs. I can't think of any sort of information transformation a biological circuit can do (again, in principle) that an electrical one couldn't. The difference is the scale and magnitude of the biological circuitry compared to the laughably simple digitized neural networks running on today's machine learning software. Even so, this is a difference of scale, not a difference of kind.
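
To put "inputs, transformation, outputs" in concrete terms, here is a toy neural-network layer - no particular framework, and the weights are invented for illustration. It is nothing but the multiply, add and compare from a basic instruction set:

```python
import math

# A toy one-layer "neural network": multiply, add, compare - nothing a
# basic instruction set can't do. Weights and biases are made up.
def layer(inputs, weights, biases):
    outputs = []
    for row, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(row, inputs)) + bias  # transform inputs
        outputs.append(1.0 / (1.0 + math.exp(-total)))          # squash to (0, 1)
    return outputs

# inputs -> transformation -> outputs
print(layer([0.5, -1.0], weights=[[0.8, 0.1], [-0.3, 0.9]], biases=[0.0, 0.2]))
```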

If there is more going on in the brain than just information processing (e.g., a soul or something, which, like you, I tend not to believe in), then biology and computers may be fundamentally in different domains.

But again, we're speaking in terms of theoretical domain, not practical or current domain. I absolutely agree that we have no idea how the brain really works and can't hope to truly simulate it at the current time.

One thing of which I'm sure - to compare current computer architecture with the human brain is a mistake.

I half agree here. If you mean that comparing current ML and NN implementations to the brain is a mistake, I agree fully. What we have now are very rudimentary systems that require vast training processes which have no counterpart in real humans.

But if by architecture you just mean CPUs, networks, RAM, etc., then I actually disagree. I still think it is just a matter of scale, speed, and complexity. In principle (IMHO), our current hardware could run the same I/O circuits as a given human brain IF we knew way more than we do about how the brain works, so that we could develop the ML software. It would be uselessly slow, though. Architecture improvements would certainly be necessary to make this practical (and are already coming... NVIDIA has a whole line of ML chips now), but still... problems of scale, not kind.

Curious to hear your take on all that.
 
Why can people laugh or cry from reading a passage in a book? How does reading some symbols trigger these supposed chemical reactions?

Because emotions are cognitive. They are subjective. A novice and an experienced diver standing on the edge of a sea cliff feel different emotions because they're comprehending their circumstances from their own perspectives, i.e. making their own value judgements, not universal ones.
 
The reality is that the human brain is the most complex thing that we know of in the universe and to think that we can understand it or explain it, at the moment, is futile.

My personal feeling about this is that such a position is false, and no different to the "we'll never understand it" solution to the 'problem' of consciousness.

I think a general description - which is all we need to get us going - is close. Perhaps very close. Let's not forget, it's only 120 or so years since Freud's game-changing discovery of the unconscious. For the previous 80,000 years or more, people thought the contents of their conscious minds were everything. So there's much to do, but much already done.
 

Whilst I agree very much with your first statement in principle...

Psychologists were discussing the unconscious well before Freud got his hands on it, so I wouldn't say he 'discovered' it.

Yes, he certainly proposed a ground-breaking model of how the mind works for its time... yet I believe a great deal of his hypotheses have been found to lack any evidence whatsoever, and although he remains very much the public's picture of what psychology is and how we spods think our minds work* - after all, sex does sell - I'm not sure it was, with hindsight, too much of a step forward.

(I, being a spod in this field of knowledge, am more than willing to be corrected on this point!)

So I am much more cautious about what we may achieve going forward. Partly it is the difficulty of getting hard evidence to demonstrate hypotheses - although technology is certainly fast increasing the levels of probing and precision we can achieve in looking at a brain. So I may, in all likelihood, be too pessimistic on that front.

But I also feel that our whole philosophical underpinnings of what consciousness is are still on very shaky ground. I feel we just don't have a grasp of the nature of subjective consciousness that feels 'whole and intuitive'. An example that gives me a similar feeling is our understanding of infinite sets, the nature of infinity itself giving us paradoxes and, at least for me, a strong sense of incompleteness on a very deep level.

------------------------------------------------------------------------
* For example, I feel comfortable using the term subconscious and making assumptions about it, when really it comes mostly from 'the unknown'.
 
My personal feeling about this is that such a position is false, and no different to the "we'll never understand it" solution to the 'problem' of consciousness.

I think a general description - which is all we need to get us going - is close. Perhaps very close. Let's not forget, it's only 120 or so years since Freud's game-changing discovery of the unconscious. For the previous 80,000 years or more, people thought the contents of their conscious minds were everything. So there's much to do, but much already done.

I agree, Stephen. That's why I said 'at the moment'.
 
This is a good point, but since we are speaking in terms of theoretical boundaries, all we really need to decide is whether there is more going on in the brain than information processing. ... Even so, this is a difference of scale, not a difference of kind.

Curious to hear your take on all that.

Thanks for a couple of interesting posts, zmunkz. Sorry to have taken so long to reply.

I think that although both computers and brains are doing information processing, that doesn't mean to say they are doing it in the same way - and the different way they are doing it may, perhaps, lead to side effects such as consciousness and emotions.

An element that the human brain has to deal with, and a computer doesn't, is survival of the self. Honed by natural selection, this may play a part in the development of both consciousness and emotions (and many other things).
 
Yes, he certainly proposed a ground-breaking model of how the mind works for its time... yet I believe a great deal of his hypotheses have been found to lack any evidence whatsoever...

Just for the record, I think he was way, way off on 99% of his theories. A man of his time, for sure.

His "discovery" to put it in quotation marks was the first time the significance of said organisation of the mind was seriously dealt with. Definitely a game-changer i m o.
 
