Terminator is never going to happen

Justin Swanton

One thing to keep in mind when talking about AI is that a computer does not and cannot 'think' as humans do. It's not really intelligent.

A computer CPU executes a very large number of mathematical calculations - or rather, a simulation of mathematical calculations - in binary code. No matter how powerful the CPU, or how many CPUs are linked together, the computer always remains at the level of simulated mathematics. It cannot rise one millimetre towards true thinking.

True thinking means grasping abstract concepts. We examine a number of diverse objects and extract from them something non-material (and non-mathematical) that they have in common. So after looking at a collection of green living things, we abstract the concept of 'tree'. These things, which may look physically quite dissimilar, all have something in common - a nature, itself not reducible to physical phenomena. They are trees.

With the exception of names and proper nouns (and not even them, really), every word in English expresses an abstract concept, something that is not itself physical but is possessed in common by physical entities. Abstract concepts extend to every part of our understanding of the universe: 'beautiful', 'good', 'evil', 'useful', 'expendable', and so on. A computer does not begin to comprehend them. It just performs mechanical simulations of mathematical calculations. It doesn't even understand the maths it does. We understand the truth behind the affirmation that 2 + 2 = 4. A computer is just programmed to produce a mechanical simulation of that calculation.
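To make the 'mechanical simulation' point concrete, here is a minimal sketch (Python, with illustrative names of my own) of 2 + 2 worked out the way hardware works it out - bits shuffled through logic gates, with nothing anywhere in the process that could be called understanding:

```python
# A 4-bit ripple-carry adder built from nothing but logic gates.
# Nowhere below is there any notion of 'four' - only bit-shuffling.

def full_adder(a, b, carry_in):
    """One-bit adder: pure boolean operations."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

def add4(x_bits, y_bits):
    """Add two 4-bit numbers given as [LSB..MSB] lists of 0/1."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result

two = [0, 1, 0, 0]        # binary 0010, least significant bit first
print(add4(two, two))     # -> [0, 0, 1, 0], i.e. binary 0100 = 4
```

A CPU's billions of operations per second are just this, repeated and composed.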

Since computers can't think, they can't make decisions based on thinking. They can't, for example, conclude that the human race is a blot on creation and decide to exterminate it. They can't actually make decisions at all. They have no free will. Their 'decisions' are simply the end result of preprogrammed calculations. If they get things wrong, blame the humans who programmed them. They're just tools, really.

(I copied this post from another thread as it seems interesting enough to have a thread of its own)
 
Maybe the terminator will never exist, but perhaps something like Nomad in Star Trek might? Its programme was corrupted and it began to 'sterilise' races it came across. So perhaps machines won't consciously set out to destroy us but may destroy us inadvertently.
 
Computers can indeed make decisions. Perhaps not based on "thinking", but in response to stimuli.
For example, cars can autonomously apply the brakes if something is in their way and can turn the wheels if you stray out of your lane. The car will do this even if it is not exactly what you want, for whatever human reason you might have. No thinking is involved, just reaction.
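A minimal sketch of that kind of stimulus-response 'decision' (the threshold values and names here are invented for illustration):

```python
# Hypothetical emergency-braking/lane-keeping logic: a pure
# stimulus-response rule, with no deliberation anywhere.

BRAKING_DISTANCE_M = 25.0   # made-up safety threshold
LANE_TOLERANCE_M = 0.5      # made-up allowable drift

def control_step(obstacle_distance_m, lane_offset_m):
    """One control-loop tick: map sensor readings straight to actuator commands."""
    return {
        "brake": obstacle_distance_m < BRAKING_DISTANCE_M,
        "steer_correction": -lane_offset_m if abs(lane_offset_m) > LANE_TOLERANCE_M else 0.0,
    }

print(control_step(obstacle_distance_m=12.0, lane_offset_m=0.7))
# -> {'brake': True, 'steer_correction': -0.7}
```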
In the Terminator series, Skynet was programmed to respond to nuclear threats without human input, because human input would be too slow. The more variables you add, and the more complex the scenario the computer is asked to solve, the more chance there is of it making a decision a human would never make.
As we give AI more ability to make decisions for us in the name of convenience, we give up some self-determination. There is little doubt that machines will give us unintended consequences as we ask them to do more.
Terminator cannot happen NOW, but in 50 years I would not be so sure. So I disagree with the OP's premise. "Thinking" does not have to be part of the Terminator universe. The machines are just reacting as they were programmed, after evolving from a self-learning neural net.
 
Oh, groan... sorry, needed to get that out of my system first. I could write an extensive essay on this subject, but I'll limit myself to this: the future is going to be stranger than you good people think. If you want a small inkling, I suggest you read the free part on Amazon of my short story, Agents of Repair. But bottom line: Terminator isn't going to happen the way a lot of people think (big destruction or insidious takeover), for very sound reasons.
 
Technically, it seems the terminator itself was not the AI in the equation (it used 8080 processors or some such, and we know that is pre-AI ;)).
It was a robot programmed to kill - to target humans specifically, and specific humans in the one situation.
Granting that Skynet was the AI, that does not preclude Andrew Skynet, the human engineer, going on some insane jape to destroy humanity with his mobile robotic terminator.

True, Skynet won't happen.
However, robots already kill humans who invade their work space without first deactivating the robot.
 
Given the plots of the recent Terminator movies, it doesn't seem like Skynet does much thinking at all.
Are the directors even human? They don't seem to be thinking much either.
 
Given the plots of the recent Terminator movies, it doesn't seem like Skynet does much thinking at all.
Are the directors even human? They don't seem to be thinking much either.

The directors, producers and writers don't know what to do with the concept of Skynet.
 
Given the plots of the recent Terminator movies, it doesn't seem like Skynet does much thinking at all.

The problem is the same as in most alien invasion movies: Skynet or the aliens would be so much more capable than humans that the heroes stand no chance unless the bad guys are deliberately written to be stupid.
 
Back on the original topic, the whole concept of 'thinking' seems to be pretty dubious. For example, one of Daniel Dennett's books had a section on experiments which showed that people typically 'thought' about moving their limbs after the signal to move the limbs had already been sent from the brain. 'Thinking' seems to be more of an attempt to rationalize behaviour that's already taken place than a cause of that behaviour.

Nor is it particularly helpful. If you met two identical twins who acted the same way, but someone told you that one of them 'thinks' and the other doesn't, what difference would it make?
 
One thing to keep in mind when talking about AI is that a computer does not and cannot 'think' as humans do. It's not really intelligent.

Arguments that start by declaring an unsupported assertion are going to have dubious conclusions. "Cannot" immediately raises a red flag. Of course, it's possible to weasel the statement by pointing out "as humans do", which might be defensible but is irrelevant. Or by specifying that you're talking about computers as they exist right now (binary logic, etc.). Also irrelevant.

The problem with declaring that computers will never be able to think (which is really what you're saying) is that it implies there is something about organic life that is unique and special and that exists beyond or outside the physical laws of the universe. Essentially, some version of a "soul", without getting into any specific belief system. The problem with this declaration is that A) there's no evidence of such a thing, and B) there's plenty of evidence against it. Start with Phineas Gage, read any number of books by Oliver Sacks, or for that matter any number of medical journals, and it becomes immediately obvious that the mind exists entirely within and is completely dependent on the physical brain.

To head off the obvious retort: yes, consciousness is an emergent phenomenon. So what? That doesn't preclude a silicon (or other artificial) version. Technology isn't there yet (emphasis on yet), but it hasn't stalled, either. Very recently, someone developed an artificial neuron that behaves like a natural one, in that it takes multiple inputs and generates multiple possible outputs based on those inputs, and is trainable (i.e. it learns and remembers). At some point, I believe they'll put together something that learns, is curious, can make inferences, and has opinions - and all this without those attributes having been specifically programmed in. At that point, saying it doesn't 'think' is going to sound a lot like a No True Scotsman argument: thinking is defined as something only organics do, therefore it isn't really 'thinking'.
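For flavour, the textbook ancestor of that kind of trainable unit - the perceptron - fits in a few lines of Python. A toy illustration only (this is the classic learning rule, not the recent device mentioned above): it starts with random weights and learns the AND function purely from examples.

```python
import random

# Toy perceptron: multiple inputs, one output, trainable weights.
# Nothing about AND is programmed in; it is learned from examples.

def train(samples, epochs=50, lr=0.1):
    w = [random.uniform(-1, 1) for _ in range(len(samples[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, inputs)) + b > 0 else 0
            err = target - out            # 0 if correct, else +/-1
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x, _ in data])
# -> [0, 0, 0, 1]: AND is linearly separable, so the rule converges
```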

Just my 2 cents.
 
It's fairly clear that several posters do not understand what is meant by an 'abstract concept.' The human mind thinks in abstractions and the point about abstractions is that they are not reducible to anything physical. They are not the same as a memory, a feeling, a visual or audio impression, or any combination thereof, although they can certainly carry sensory impressions or emotions with them. 'Justice' for example can carry the emotion of revenge with it, but the concept of justice is not itself reducible to vengeful feelings.

(To prove the point, look at every word in the preceding paragraph: not a single one stands for any particular, concrete object or sensory impression; they are all generalisations, indicating realities that in the real world can have very different physical or sensory characteristics.)

Since abstractions can't be equated with anything material they can't, in consequence, be produced by a material process. Since they can't be produced by a material process they can't be produced by a CPU, or any number of CPUs linked in an array. Hence computers can't think.

As a final conclusion, the existence of abstract thought points to a part of human nature that is not material or biological and that is capable of producing abstract thinking. This, for Aristotle, was the proof of the existence of the animus, the immaterial component of a human being that is conjoined to the body but not identified with it.

@Edward M Grant: humans are capable of instinctive reactions that bypass abstract thinking and function perfectly well without it.

@Dennis E Taylor: we need to be clear about what 'consciousness' is. The word is very woolly, and one can argue that any living organism has consciousness to a degree. I suggest simply dropping the word for this discussion and concentrating on human thinking, which is something that does set us apart from all other species.
 
It's fairly clear that several posters do not understand what is meant by an 'abstract concept.' The human mind thinks in abstractions and the point about abstractions is that they are not reducible to anything physical.

So do computers. In fact, it's pretty much impossible to tell how a complex neural network makes decisions. Stuff goes in, stuff comes out, what happens in between is a mass of calculations that bear no resemblance to anything a human programmer would tell it to do.

Yet, somehow, it figures out the abstract concept of a 'face' and replaces the pr0n actor's face with Emma Watson's.
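To see why it's opaque, here's a toy two-layer forward pass (Python; random numbers stand in for trained weights, and the names are mine). Scale this up to millions of trained weights and the 'why' of any particular output is unreadable:

```python
import random

# A tiny two-layer network's forward pass: the entire 'decision'
# is this arithmetic. The 'why' is buried in the weight matrices.

def forward(x, w1, w2):
    # hidden layer with ReLU activation
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    # linear output layer
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2]

x = [0.2, 0.7, 0.1]                                            # input features
w1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
w2 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(2)]
print(forward(x, w1, w2))   # two output scores, inscrutable in practice
```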

@Edward M Grant: humans are capable of instinctive reactions that bypass abstract thinking and function perfectly well without it.

This had nothing to do with 'instinctive reactions'. I forget the precise details of the experiment, but somehow they measured both the time the person claimed they were about to raise their arm (or move some other limb, whatever it was) and the time the nerves sent the signal to the limb to move. And the signal was sent before the person claimed they were going to move their arm.

So the supposed 'thought' came after the brain signalled the arm to move, not before.
 
Since abstractions can't be equated with anything material they can't, in consequence, be produced by a material process. Since they can't be produced by a material process they can't be produced by a CPU, or any number of CPUs linked in an array. Hence computers can't think.

This is a complete non-sequitur. Here, let me rephrase it for you:

Since abstractions can't be equated with anything material they can't, in consequence, be produced by a material process. Since they can't be produced by a material process they can't be produced by a neuron, or any number of neurons linked in a brain. Hence people can't think.

I'm going to assume for the sake of argument that you disagree with this version. And yet it uses the same logic.
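Schematically (my notation, with m standing for whatever material machinery you pick), both versions instantiate the same form:

```latex
% Shared form of both arguments; m = the material machinery in question.
\forall x\,\bigl(\mathrm{Abstract}(x) \rightarrow \lnot\,\mathrm{Material}(x)\bigr) \\
\forall x\,\bigl(\lnot\,\mathrm{Material}(x) \rightarrow \lnot\,\mathrm{ProducedBy}(x, m)\bigr) \\
\therefore\ \lnot\,\exists x\,\bigl(\mathrm{Abstract}(x) \wedge \mathrm{ProducedBy}(x, m)\bigr)
```

Substitute m = an array of CPUs and you have your version; substitute m = a brain full of neurons and you have mine. Identical form, so if it disproves one it disproves both.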

As I alluded to earlier, consciousness (or whatever you want to call it) is an emergent phenomenon. As such, it is not dependent on the substrate on which it occurs. This is actually covered in some depth in Exploring Metaphysics by David K Johnson. One of the examples he uses: what if aliens visit us tomorrow, and they are silicon-based, or have some other structure that doesn't match up with our brains? Will we declare them to be non-sentient? How do you suppose that will go over? I'm guessing, "not well."
The point is that declaring a priori that any given substrate won't produce consciousness without having some form of supporting proof is simply an unsupported assertion, and no more credible than Aristotelian physics.


@Dennis E Taylor: we need to be clear about what 'consciousness' is. The word is very woolly, and one can argue that any living organism has consciousness to a degree. I suggest simply dropping the word for this discussion and concentrating on human thinking, which is something that does set us apart from all other species.

This is a problem, again as I mentioned earlier, in that you are applying a No True Scotsman argument: any consciousness other than human isn't true consciousness because it isn't like human consciousness. It's a true statement, but a tautology.

Again, referring to my hypothetical visiting aliens: I will bet real money (and give you odds) that any visiting aliens will have a form of consciousness that is different from ours. Using your criterion, they are therefore not truly conscious because their consciousness is different.

The fundamental problem, of course, is (as you said) the definition of consciousness is 'woolly'. And I will also bet real money that, if we end up developing A.I. that is arguably conscious, there will be a lot of argument, resistance, and court activity before it's recognized as such.

I think a far more interesting argument would be about what forms of proof, in principle, would be sufficient to accept an A.I. as conscious and therefore deserving of rights.
 
This is a complete non-sequitur. Here, let me rephrase it for you:

Since abstractions can't be equated with anything material they can't, in consequence, be produced by a material process. Since they can't be produced by a material process they can't be produced by a neuron, or any number of neurons linked in a brain. Hence people can't think.

Your reasoning is watertight (except for the conclusion), which means that neurons can't produce abstract concepts; hence the component of human nature that does produce abstract concepts isn't neurons. That's Aristotle's point: the animus isn't the brain. What the brain does is supply the raw material from which the animus (if I say 'soul' I'll be pigeonholed as a religious fundamentalist) extracts the concepts.

When a baby is born it doesn't have a single thought/concept in its head ('head' in the figurative sense). It looks at the shapes around it. The eyes supply images of those shapes to the brain. Once enough images have been collected into the memory, the animus is able to abstract the concepts that these images have in common: "tree", "bird", and so on. And so thinking begins. Concepts are compared and further concepts deduced from them by the animus. The neurons' part is to supply new images and keep older images in storage for future examination.

PS: in an attempt to keep this away from the topic of religion (which risks getting personal), the demonstration of the existence of the animus can be made without any immediate reference to a God. One could be content just with a universe that has more to it than atomic or subatomic structure. A good SF theme there.
 
This is a problem, again as I mentioned earlier, in that you are applying a No True Scotsman argument: any consciousness other than human isn't true consciousness because it isn't like human consciousness. It's a true statement, but a tautology.

Not quite. I'm saying that consciousness isn't the same as thinking, and we're talking about thinking here.
 
No true Scotsman puts sugar on his porridge!

It depends on the context.

Thought refers to ideas or arrangements of ideas that are the result of the process of thinking. Though thinking is an activity considered to be embodied in humanity, there is no consensus as to how it is ultimately defined or understood.

Consciousness includes both thinking and awareness. It may be that Justin is trying to point out that computers may "think" but are not necessarily aware that they are thinking. This of course goes back to René Descartes' famous line "Cogito ergo sum"; if you carefully examine the whole of Descartes' argument, what he actually says is "I can deny that I am thinking, but I cannot deny that I am denying my thought". From the Wiki article: a fuller form, penned by Antoine Léonard Thomas, aptly captures Descartes' intent: dubito, ergo cogito, ergo sum ("I doubt, therefore I think, therefore I am").

For the most part, humans think in relation to their environment and the stimulus received from it. After all, we are immersed in the physical world from day one (as has been stated). Computers are not limited in their capacity to "think" by environmental constraints, but they are more limited in terms of stimulus. Still, computers can be connected to the same environment that humans are by sensors. In that regard, they could potentially be more aware of their environment than we are. But the question remains: do computers know that they are thinking? (Do Androids Dream of Electric Sheep?)

Present-day computers can be programmed to mimic humans in just about any way you can imagine. And the truth of the statement that computers do what humans programmed them to do is fairly incontrovertible, even if the programmer did not consider every possible action that results from their programming. So by simple cause and effect, we could mistakenly create a terminator, or a kind of AI that could result in that scenario. But it is highly unlikely, and I'm surprised no one has brought this up - we have access to the machines' power source.

To say that "Terminator is never going to happen" is a rather bold statement, and I would call it an open-ended argument since it can't be proven wrong until it does happen.

This is kind of "right up my alley", as I am a computer scientist and programmer. It's what I do for a living.
 
Back on the original topic, the whole concept of 'thinking' seems to be pretty dubious. For example, one of Daniel Dennett's books had a section on experiments which showed that people typically 'thought' about moving their limbs after the signal to move the limbs had already been sent from the brain. 'Thinking' seems to be more of an attempt to rationalize behaviour that's already taken place than a cause of that behaviour.

... although of course, since different signals from different senses reach the brain at different times, it would be very surprising if "thought" were perceived at the same time as the signals.
 
