Is this Google AI Engineer Crazy or Not?

As I said in a previous post, I'm not sure we'll ever reach a point where we can tell for sure whether machines have definitely become sentient.

Leaving aside the semantics and problems associated with the word "sentience", one obstacle is that we don't know our own brain mechanics well enough. That's just the physical part, not the consciousness that arises out of it. I don't think it's unreasonable to expect that we never will understand it fully (but that's a different discussion).

Similarly, a mechanical brain would have to be something beyond our understanding - even if we created, or instigated, it. Machine learning systems already behave in ways that reach beyond human comprehension, even though we do understand the underlying code, even if we can't easily read it. As I've understood it, anyway.

In addition to that, whatever consciousness arises out of that mess would be far removed from our understanding of reality since we are biological and can't be copied and pasted into a hard drive just yet. Our perception is highly coloured by our limitations in neurological abilities, sensory inputs and psychology. And probably a lot more.

But another interesting angle is to look at it from the perspective of systems thinking. Simple systems can result in complex behaviour. I have seen flies with "personalities". They react differently when I wave them off in annoyance, and even have different favourite spots to settle on. I saw a program recently called "The Secrets of Size" on BBC where a researcher talks about individual human heart cells having personalities. My point isn't about whether they are sentient, or about anthropomorphism, but about simply scaling up those simple systems to what I call "me". How am I anything but an organised heap of systems, thinking that I'm "sentient" because that's what these systems want me to think?

I agree. Human sentience and intelligence are so complex (or chaotic even) that we are nowhere near understanding them. How can we replicate a process electronically when we do not even understand the process itself? Answer: we can't.
 
We don't have to create a human style intelligence to still recognize the output of intelligently creative decision making.
 
Sentience should not be confused with intelligence. A being could be sentient, but utterly stupid. A seemingly intelligent being or machine doesn't have to be sentient to follow a set of rules or lines of an algorithm. Artificial Intelligence is just that; it is simulating a state of intelligence.
My brain tells my body to keep breathing, because it is wired that way. Programmed. I myself might forget to breathe, even though I am aware how important that part is. I am not able to control my beating heart. Yet I claim to be sentient (though perhaps not super intelligent).

I suspect that what defines sentience boils down to self-awareness. Doubts. Forgetfulness. Being burdened by my actions or failures. Ego.
 
Sentience is simply experiencing feelings - sensational or emotional. It is a somewhat hard claim to test, since it usually involves self reporting or observation of real world behavior other than communication.

It isn't consciousness or sapience.


All of which should be possible in non-natural systems. "Artificial intelligence" doesn't mean that the thinking is simulated (fake); it just means that it didn't arise through a natural process.
 
I suspect that what defines sentience boils down to self-awareness. Doubts. Forgetfulness. Being burdened by my actions or failures. Ego.

I would call that feedback. The use of feedback in every kind of electronic device usually increases performance, especially when that is the purpose. Feedback is also the ability to go back over previous decisions that may or may not be related to the current situation and use that information to formulate a response that is similar, different, or completely out of the ball park.
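
As a rough sketch of that kind of loop (made-up numbers and function names, purely illustrative, not anything from the thread): a simple proportional controller folds the error from its last output back into the next one.

Code:
# Toy sketch of a feedback loop (illustrative names and values only):
# the next response is adjusted by the error left over from the last one.
def feedback_step(current: float, target: float, gain: float = 0.5) -> float:
    error = target - current        # how far the previous output missed
    return current + gain * error   # fold that error back into the next output

value = 0.0
for step in range(10):
    value = feedback_step(value, target=10.0)
    print(f"step {step}: {value:.2f}")  # output converges toward the target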

The density of the switches, either electronic or neuron, determines how fast and how much information can be processed in a useful manner. At this time I don't believe there is anything that matches the density of the human brain's switching networks, data bank connections and power consumption. The brain isn't all that fast compared to what can be accomplished electronically, but the extremely short distance between actions and the low power involved in the human brain make it extremely flexible in handling any situation compared to a machine.

Computing machines with localized data banks will get bigger and bigger, which could allow them to use feedback from past decisions and results to quickly come up with a wide range of useful, possibly human-like responses, in a short amount of time. These machines will not be small, most likely building-sized.

IBM's Watson is composed of 90 IBM Power 750 servers, each measuring 29 x 17 x 7 inches and giving off 6,000 BTU per hour; together they weigh 9,000 pounds and use 180,000 watts of power. It has not been a game changer for IBM; most applications did not result in huge profits, or any profit at all for some. One thing it is good at is understanding language, as seen when it played the TV game Jeopardy. However, in the simple act of hitting the buzzer, even with all its "memories" stored in RAM, it was 7 seconds slower than a human.
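
Taking the post's figures at face value, a quick back-of-the-envelope check shows the heat and power numbers are at least consistent with each other, since essentially all the electrical power drawn ends up as heat.

Code:
# Back-of-the-envelope check using only the figures quoted above.
servers = 90
btu_per_server_hr = 6_000
total_heat_btu_hr = servers * btu_per_server_hr        # 540,000 BTU/hr

power_watts = 180_000
btu_hr_per_watt = 3.412                                 # 1 W is about 3.412 BTU/hr
power_as_btu_hr = power_watts * btu_hr_per_watt         # about 614,000 BTU/hr

print(total_heat_btu_hr, round(power_as_btu_hr))
# Same order of magnitude, as expected: nearly all power drawn is dissipated as heat.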

It would be interesting to see Google, Amazon, and IBM's machines demonstrate their language skills by interacting with each other. Probably won't happen for a long time as the image of The Three Stooges pops into mind.
 
However, in the simple act of hitting the buzzer, even with all its "memories" stored in RAM, it was 7 seconds slower than a human.
One of the most compelling demonstrations I saw in a neuroscience class was when the professor asked the class to "Name a Beatle." Multiple people blurted out a (correct) name with seemingly no real reaction time, despite no prior allusions to the band, music, famous people, etc. For some things, we clearly aren't searching our memories - it is loaded up and ready to go in "RAM".
 
Has anyone asked Skynet what’s happening out there? After all, it’ll be 25 years in August since it became self-aware…
 
Feedback is also the ability to go back over previous decisions that may or may not be related to the current situation and use that information to formulate a response that is similar, different, or completely out of the ball park.
This level of feedback requires some concept of what the right answer should be. Humans have this internalized. The most common form under the umbrella of AI is currently Machine Learning. In this case, there is a training period in which the machine is given samples to evaluate and is given what the correct evaluation should be. Based on this, the machine builds an algorithm. Following the training period, the machine will evaluate data items, but it lacks a feedback loop to determine how well it matches the desired result. Given the same input, the machine will continue to give the same result.

Currently, computers are nowhere close to making a determination of right or wrong; correct or incorrect. There is a difference between responding to a situation and being cognizant of the situation or even initiating it.
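
A minimal sketch of the train-then-freeze pattern described above (scikit-learn with toy data; the model choice is just for illustration): once fitted, the model has no further feedback loop, so the same input always yields the same output.

Code:
# Minimal sketch of supervised learning as described above
# (scikit-learn, toy data; illustrative only).
from sklearn.linear_model import LogisticRegression

# Training period: samples plus the "correct evaluation" for each one.
X_train = [[0.0], [1.0], [2.0], [3.0]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)       # the machine builds its rule here

# After training the rule is frozen: no feedback on whether predictions
# match the desired result, and identical input gives identical output.
print(model.predict([[2.5]]))     # -> [1]
print(model.predict([[2.5]]))     # -> [1], same result every time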
 
One of the most compelling demonstrations I saw in a neuroscience class was when the professor asked the class to "Name a Beatle." Multiple people blurted out a (correct) name with seemingly no real reaction time, despite no prior allusions to the band, music, famous people, etc. For some things, we clearly aren't searching our memories - it is loaded up and ready to go in "RAM".
Human brains work by association, not by searching memory banks. It's a lot faster and more versatile than searching and comparing data to find a match, or something that seems closest but can be entirely wrong. If you asked an AI to name a Beatle, it would probably ask whether you meant 'beetle' and provide a list of all insects in the order Coleoptera, and include Volkswagen for good measure.
 
The point the professor - who was one of the big names in neuroscience at the time - was making is that we don't know how it works. Saying it is "by association" is about as useful as saying it does it "by use of neurons". Regardless of the search method, it is not a process akin to searching a stored data medium.
 
The density of the switches, either electronic or neuron, determines how fast and how much information can be processed in a useful manner.
I thought you might find this interesting - especially this part:

"Researchers examining the brain at a single-neuron level found that computation happens not just in the interaction between neurons, but within each individual neuron. Each of these cells, it turns out, is not a simple switch, but a complicated calculating machine."

The brain is just amazing.
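
For contrast, here is the "simple switch" abstraction that the quoted research says real neurons far exceed: a single artificial neuron as a thresholded weighted sum (a generic textbook sketch, not taken from the article).

Code:
# The "simple switch" abstraction the quoted research pushes back on:
# a neuron modelled as a thresholded weighted sum that either fires or not.
def artificial_neuron(inputs, weights, threshold=1.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # on/off, nothing more

print(artificial_neuron([0.5, 0.9], [1.0, 0.8]))  # 1 -> "fires"
print(artificial_neuron([0.1, 0.2], [1.0, 0.8]))  # 0 -> stays silent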
 
Sentience is simply experiencing feelings - sensational or emotional. It is a somewhat hard claim to test, since it usually involves self reporting or observation of real world behavior other than communication.


It isn't consciousness or sapience.


Drilling down into that, what does it mean to experience within the context of sentience, though? Experience in that sense - feeling - is a function of consciousness, is it not? The representation of the body and conscious experience is necessary to feel pain (hence anaesthesia).

Previously, I had assumed when we talked about sentient animals that they have certain conscious functions that separated them from non-sentient life forms. Weren't certain Cephalopods recently legally classed as sentient?
https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1514&context=animsent

The jury appears to be out on sentience and insects.


I would call that feedback.

In Douglas Hofstadter's writings, he puts forward a theory that consciousness emerges from feedback loops (Strange Loops as he calls them) - that consciousness is the brain's internal model for governing and integrating all the data from the body into the self ("I"), and it does this self-referentially.

Intriguingly, anaesthesia's main property seems to suggest this may be true. This seems to be backed up by the work of neurophilosopher Thomas Metzinger, who states that consciousness creates a visceral representation of experience that enables us to exist in the present as an entity that sees, feels, experiences, etc.
https://www.naturalism.org/resources/book-reviews/consciousness-revolutions#:~:text=%2C%20Metzinger%20holds%20that%20consciousness%20is%20an%20internal,put%20it%2C%20experience%20supervenes%20locally%20on%20brain%20states.
 
This level of feedback requires some concept of what the right answer should be. Humans have this internalized. The most common form under the umbrella of AI is currently Machine Learning. In this case, there is a training period in which the machine is given samples to evaluate and is given what the correct evaluation should be.

Humans have instinct and these are internalized - is that what you mean?

Off topic for a moment, but for anyone who's studied Jacques Derrida, the ramifications of machine learning and neural nets for Postmodern philosophy are mind-boggling. Derrida was so off base when he proposed Différance that several entire academic fields rest on shaky foundations, imho.
 
Douglas Hofstadter
I loved the essay where the students prank him and do a reverse Turing test.

H: What are legs?
Computer gives the correct answer.
H: What are arms?
Computer: That is classified
 
Drilling down into that, what does it mean to experience within the context of sentience, though? Experience in that sense - feeling is a function of consciousness is it not? The representation of the body and conscious experience is necessary to feel pain (hence anaesthesia).
But you can have consciousness and sensation while having just the pain blocked, so I don't know if that tells us something instructive or not. There is a philosophical school of thought that says that there is no way of experiencing feelings if you don't have a sense of self, which I'm sure runs counter to the point that animal-rights sentience advocates are looking for.

In terms of AI, if the only claim you're making is sentience - and not consciousness - what are you claiming? I think the Google guy was just seeing emotional output reflected back at him and claiming that emotional language shows feeling. But a person can speak empathically without having any internal emotional reaction.
 
I still prefer Simulated Intelligence.

Those guys at Dartmouth that started talking about Artificial Intelligence in 1956 were just ridiculous.

What I would like to see is true AI in a female shaped robot talking to a woman about optimum breast size. What if sentient AIs don't want human bodies? HAL makes more sense than Data.
 
I beg to differ about how well we understand consciousness. "We" depends on who you ask. Actually, consciousness is well understood now: the social intelligence theory is largely accepted, and we have a good understanding of the evolutionary processes involved. Numerous excellent books and academic papers have been written, most of them homing in on roughly the same region.
There is still much debate about the mind/body "problem" (yes, I've put that in quotes as well), but, then again, see Nicholas Humphrey's book Seeing Red. I don't think we're that far away from a proper, generally accepted theory. I think it might be better to say that the general public is a long way from understanding consciousness, but since it isn't on the curriculum that's not much of a surprise. People tend not to discuss such things anyway, except in self-selecting groups online or in the pub.

As usual, much of the problem is terminology. Sentience is not the same as consciousness. Intelligence is not the same as consciousness. Sentience is not the same as intelligence. Etc etc...
 
But you can have consciousness and sensation while having just the pain blocked, so I don't know if that tells us something instructive or not.

Do creatures lose sentience if they take pain-blocking medication?

In terms of AI, if the only claim you're making is sentience - and not consciousness - what are you claiming? I think the Google guy was just seeing emotional output reflected back at him and claiming that emotional language shows feeling. But a person can speak empathically without having any internal emotional reaction.

I don't know if it's entirely clear what he's saying. I interpret it to mean that, to him, the responses the programme gives suggest a mind is at work that understands itself as a separate, cohesive entity that is using creativity to construct novel sentences rather than collaging a statistically likely response to an input. It has some object permanence, so it can remember past topics and claims emotion.

I'm most sceptical of the last claim, personally. Emotions are evolutionary adaptations to provoke survival responses - love, hate, anger, fear, lust - all have a correlate in hormonal action and a resulting influence on behaviour. I don't believe linguistic analysis would give genuine fear responses because there is no resulting physiological change. I don't believe it ruminates on things when there is no input either.
 
Humans have instinct and these are internalized - is that what you mean?
I am trying to describe something more than instinct; instincts are present in many creatures other than humans and would certainly not justify a description of something operating at the level of a seven-year-old child.

Human individuals determine their own internal models of what is 'right,' what is 'ideal.' Furthermore, they can change and even reverse these beliefs over time. Humans also possess the ability to create an underlying rationale for their beliefs. This goes far beyond the ability to say "A1 is an A"; it is the ability to say "A1 is an A because ..." There is a proactiveness that I feel is missing from what is currently possible with computer algorithms. They can reverse engineer data and discover fascinating or obscure patterns, but they cannot formulate an idea and then find data to prove or disprove it.
 
