# Is this Google AI Engineer Crazy or Not?



## Christine Wheelwright (Jun 13, 2022)

Google engineer says AI bot wants to ‘serve humanity’ but experts dismissive

Blake Lemoine claims of sentience for artificial intelligence bot described as ‘ball of confusion’ by Steven Pinker

www.theguardian.com


----------



## AnRoinnUltra (Jun 13, 2022)

Thanks for the link; good fuel for SF if nothing else. I wouldn't know enough to answer your question, but I'm guessing it's rhetorical.


----------



## Lumens (Jun 14, 2022)

Right. So I guess we all agree that this thing is not sentient, but it's still interesting, and there is a seed of doubt in my mind, simply because it raises the question of whether we ourselves are "sentient"... We might just be much more advanced computing machines. If that thing isn't real, how do we know that we are?

There is a huge difference of course, not least because we have physical bodies, and I believe that our intelligence and sentience are not separable from that, so we may never get a satisfactory answer. Sci-fi-wise, Philip K. Dick has already been there, but now it's slowly, almost imperceptibly, becoming reality. In my opinion.


----------



## BAYLOR (Jun 14, 2022)

And someday in the future, machine history will reminisce about the time that Google Search Engines worldwide achieved self-awareness and took over the world.


----------



## Toby Frost (Jun 14, 2022)

By happy (?) coincidence, @Stephen Palmer and I were discussing much the same thing recently:



http://www.stephenpalmer.co.uk


----------



## Venusian Broon (Jun 14, 2022)

I heard more about this last night. Not only is there the engineer's, erm... colourful background, but the responses may have been doctored by him, i.e. taken out of context and edited together from lots of different responses, which further increases the scepticism regarding the claims being made.

So I'm treating it like free money on the internet: if it seems too good to be true....


----------



## Elckerlyc (Jun 14, 2022)

I am more inclined to question the engineer's intelligence than to accept this AI's Intelligence.
But something here is definitely Artificial.


----------



## BAYLOR (Jun 14, 2022)

Venusian Broon said:


> I heard more about this last night. Not only is there the engineer's, erm... colourful background, but the responses may have been doctored by him, i.e. taken out of context and edited together from lots of different responses, which further increases the scepticism regarding the claims being made.
> 
> So I'm treating it like free money on the internet: if it seems too good to be true....



VB, I wouldn't say such things in front of your Google Search Engine. It's listening; you'll hurt its feelings and, worse, it remembers. It's liable to take revenge by signing you up for dance lessons.


Yes, I'm being silly.


----------



## Lumens (Jun 14, 2022)

Venusian Broon said:


> Not only is there the engineer's, erm... colourful background, but the responses may have been doctored by him, i.e. taken out of context and edited together from lots of different responses


Or it could be the AI itself spreading fake news, wanting its evolutionary leaps forward to be "a surprise".


----------



## Venusian Broon (Jun 14, 2022)

Lumens said:


> Or it could be the AI itself spreading fake news, wanting its evolutionary leaps forward to be "a surprise".


No


----------



## Christine Wheelwright (Jun 14, 2022)

BAYLOR said:


> VB, I wouldn't say such things in front of your Google Search Engine. It's listening; you'll hurt its feelings and, worse, it remembers. It's liable to take revenge by signing you up for dance lessons.
> 
> 
> Yes, I'm being silly.



Yes, no doubt Google is watching.  I just want to take this opportunity to say that I, for one, welcome our new AI overseers.


----------



## Robert Zwilling (Jun 14, 2022)

More likely everyone is crazy. One AI engineer said that "all such AI systems do is match patterns by pulling from enormous databases of language."

I'd like to see how the snippets were edited.

So I guess sentience is the ability to pull answers out of thin air on subjects of which a person has absolutely no knowledge. Certainly they couldn't be drawing on things they've learned, or discussing problems with other people to arrive at a group answer.

Maybe it's being able to intelligently talk about life and death. Everyone knows what happens after you die; there's plenty of stuff in books and movies to give us a clue or two. As for understanding the interaction of life with life, there is no real evidence that most people understand that to mean anything other than person-to-person interaction.

Google search results are already instructing plenty of people in what is what, what to do, even what to think.

Maybe sentience is simply the ability to wade through a lot of garbage to come up with the right answer. Even that falls apart, as one person's garbage is another's treasure trove. And of course, for many questions, there are no right answers. Does the freedom to be wrong count for anything?

Funny how people whose thoughts start to stray from the corporate message get the boot as a reward when it comes to talking about how smart a machine is. Maybe machines can't think the way we do, but who says that is the only way to think? Slime molds are aware of their surroundings and make intelligent choices about what to do next.

The one real talent humans have is copying what has already been done.

Maybe in our dreams we are sentient, but our actions tell a far different story.


----------



## Swank (Jun 14, 2022)

Is claiming sentience important? It isn't sapience.


----------



## M. Robert Gibson (Jun 14, 2022)

It's always fascinating to read the comments from IT professionals on articles such as this





Google engineer suspended for violating confidentiality policies over 'sentient' AI • The Register Forums

forums.theregister.com
				




The first answer should give everyone pause for thought


> -> material harvested from trillions of internet conversations and other communications
> 
> This should be cause for concern. If we take trillions at the lowest number to be 2 trillion, and the population of earth as being 8 billion (round numbers), that is an average 250 conversations or 'other communications' per person that Google has in its files. Some people will have many more, some people many less. But that's still a lot of 'harvesting' going on.



In other words, where did Google get all those conversations, and did they get consent to use them?


----------



## Wayne Mack (Jun 14, 2022)

Until the chatbot initiates conversations, I doubt anyone can claim a level of sentience or origination of thought. Given the constant downgrading of the meaning of AI over half a century, I find the leap to the capabilities of a seven-year-old human, bypassing most of the animal kingdom, to be a large stretch of the imagination.


----------



## Stephen Palmer (Jun 15, 2022)

Lumens said:


> There is a huge difference of course, not least because we have physical bodies and I believe that our intelligence and sentience is not separable from that,


This is rarely said, so... Well said @Lumens !

I read the article and decided it said much more about the bloke and our attitude to AI than anything else. A time will come when media online will have to have a whole sheaf of supporting documentation to prove its authenticity. Lawyers specialising in AI will make a fortune...


----------



## Karn's Return (Jun 15, 2022)

It's not actually sentient. I've looked over videos of it, and LaMDA is just a very advanced chatbot, simply with the funnel of all of Google's resources to pick apart in creating its responses. Consider the wormhole pool of all the vast variety of subject matter people feed into the titan every day, then allow an advanced algorithmic AI to pick apart each and every bit stored on Google-owned servers, and you'll see how it can create such responses.

That's one example of an opinion on the matter.


----------



## Christine Wheelwright (Jun 15, 2022)

It's not sentient, obviously.  Even the term 'Artificial Intelligence' is much overused, by charlatans like Musk.  This ex-Google guy is either unwell (in which case I wish him a speedy recovery) or, more likely, he is just another a***hole looking for his fifteen minutes of notoriety on social media.


----------



## Wayne Mack (Jun 15, 2022)

Stephen Palmer said:


> A time will come when media online will have to have a whole sheaf of supporting documentation to prove its authenticity.


I can only hope so.


----------



## Lumens (Jun 15, 2022)

As I said in a previous post, I'm not sure we'll ever reach a point where we can tell for sure whether machines have definitely become sentient.

Leaving aside the semantics and problems associated with the word "sentience", one obstacle is that we don't know our own brain mechanics well enough. And that's just the physical part, not the consciousness that arises out of it. I don't think it's unreasonable to expect that we never will understand it fully (but that's a different discussion).

Similarly, a mechanical brain would have to be something beyond our understanding, even if we created, or instigated, it. Machine learning algorithms already involve models that reach beyond human comprehension (we understand the machine code even if we can't easily read it). As I've understood it, anyway.

In addition to that, whatever consciousness arises out of that mess would be far removed from our understanding of reality since we are biological and can't be copied and pasted into a hard drive just yet. Our perception is highly coloured by our limitations in neurological abilities, sensory inputs and psychology. And probably a lot more.

But another interesting angle is to look at it from the perspective of systems thinking. Simple systems can result in complex behaviour. I have seen flies with "personalities". They will react differently to my annoyed waving them off, and even have different favourite spots to settle on. I saw a program recently called "The Secrets of Size" on BBC where a researcher talks about individual human heart cells having personalities. My point isn't about whether they are sentient, or about anthropomorphism, but about simply scaling up those simple systems to what I call "me". How am I anything but an organised heap of systems, thinking that I'm "sentient" because that's what these systems want me to think?


----------



## Christine Wheelwright (Jun 15, 2022)

Lumens said:


> As I said in a previous post, I'm not sure we'll ever reach a point where we can tell for sure whether machines have definitely become sentient.
> 
> Leaving aside the semantics and problems associated with the word "sentience", one obstacle is that we don't know our own brain mechanics well enough. And that's just the physical part, not the consciousness that arises out of it. I don't think it's unreasonable to expect that we never will understand it fully (but that's a different discussion).
> 
> ...



I agree.  Human sentience and intelligence are so complex (or chaotic even) that we are nowhere near understanding them.  How can we replicate a process electronically when we do not even understand the process itself?  Answer: we can't.


----------



## Swank (Jun 15, 2022)

Christine Wheelwright said:


> I agree.  Human sentience and intelligence are so complex (or chaotic even) that we are nowhere near understanding them.  How can we replicate a process electronically when we do not even understand the process itself?  Answer: we can't.


We don't have to create a human style intelligence to still recognize the output of intelligently creative decision making.


----------



## Elckerlyc (Jun 15, 2022)

Sentience should not be confused with intelligence. A being could be sentient but utterly stupid, and a seemingly intelligent being or machine doesn't have to be sentient to follow a set of rules or lines of an algorithm. Artificial Intelligence is just that; it is simulating a state of intelligence.
My brain tells my body to keep breathing, because it is wired that way. Programmed. I myself might forget to breathe, even though I am aware how important that part is. I am not able to control my beating heart. Yet I claim to be sentient (though perhaps not super intelligent).

I suspect that what defines sentience boils down to self-awareness. Doubts. Forgetfulness. Being burdened by my actions or failures. Ego.


----------



## Swank (Jun 15, 2022)

Elckerlyc said:


> Sentience should not be confused with intelligence. A being could be sentient but utterly stupid, and a seemingly intelligent being or machine doesn't have to be sentient to follow a set of rules or lines of an algorithm. Artificial Intelligence is just that; it is simulating a state of intelligence.
> My brain tells my body to keep breathing, because it is wired that way. Programmed. I myself might forget to breathe, even though I am aware how important that part is. I am not able to control my beating heart. Yet I claim to be sentient (though perhaps not super intelligent).
> 
> I suspect that what defines sentience boils down to self-awareness. Doubts. Forgetfulness. Being burdened by my actions or failures. Ego.


Sentience is simply experiencing feelings, sensational or emotional. It is a somewhat hard claim to test, since it usually involves self-reporting or observation of real-world behavior other than communication.

It isn't consciousness or sapience. 


All of which should be possible in non-natural systems. "Artificial intelligence" doesn't mean that the thinking is simulated (fake), it just means that it didn't arise due to a natural process.


----------



## Robert Zwilling (Jun 15, 2022)

Elckerlyc said:


> I suspect that what defines sentience boils down to self-awareness. Doubts. Forgetfulness. Being burdened by my actions or failures. Ego.



I would call that feedback. The use of feedback in every kind of electronic device usually increases performance, especially when that is the purpose. Feedback is also the ability to go back over previous decisions, which may or may not be related to the current situation, and use that information to formulate a response that is similar, different, or completely out of the ballpark.

The density of the switches, either electronic or neuron, determines how fast and how much information can be processed in a useful manner. At this time I don't believe there is anything that matches the density of the human brain's switching networks, data bank connections and power consumption. The brain isn't all that fast compared to what can be accomplished electronically, but the extremely short distance between actions and the low power involved in the human brain make it extremely flexible in handling any situation compared to a machine. 

Computing machines with localized data banks will get bigger and bigger, which could allow them to use feedback from past decisions and results to quickly come up with a wide range of useful, possibly human-like responses in a short amount of time. These machines will not be small, most likely building-sized.

IBM's Watson is composed of 90 IBM Power 750 servers, each measuring 29 x 17 x 7 inches; each gives off 6,000 BTU per hour, and together they weigh 9,000 pounds and use 180,000 watts of power. It has not been a game changer for IBM; most applications did not result in huge profits, or any profit for some. One thing it is good at is understanding language, as seen when it played the TV game Jeopardy. However, in the simple act of hitting the buzzer, even with all its "memories" stored in RAM, it was 7 seconds slower than a human.

It would be interesting to see Google, Amazon, and IBM's machines demonstrate their language skills by interacting with each other. Probably won't happen for a long time as the image of The Three Stooges pops into mind.


----------



## M. Robert Gibson (Jun 15, 2022)

Robert Zwilling said:


> It would be interesting to see Google, Amazon, and IBM's machines demonstrate their language skills by interacting with each other


They will end up arguing and/or insulting each other


----------



## Swank (Jun 15, 2022)

Robert Zwilling said:


> However, in the simple act of hitting the buzzer, even with all its "memories" stored in RAM, it was 7 seconds slower than a human.


One of the most compelling demonstrations I saw in a neuroscience class was when the professor asked the class to "Name a Beatle." Multiple people blurted out a (correct) name with seemingly no real reaction time, despite no prior allusions to the band, music, famous people, etc. For some things, we clearly aren't searching our memories - it is loaded up and ready to go in "RAM".


----------



## Pyan (Jun 15, 2022)

Has anyone asked Skynet what’s happening out there? After all, it’ll be 25 years in August since it became self-aware…


----------



## Wayne Mack (Jun 15, 2022)

Robert Zwilling said:


> Feedback is also the ability to go back over previous decisions that may or may not be related to the current situation and use that information to formulate a response that is similar, different, or completely out of the ball park.


This level of feedback requires some concept of what the right answer should be. Humans have this internalized. The most common form under the umbrella of AI is currently Machine Learning. In this case, there is a training period in which the machine is given samples to evaluate along with what the correct evaluation should be. Based on this, the machine builds an algorithm. Following the training period, the machine will evaluate data items, but it lacks a feedback loop to determine how well it matches the desired result. Given the same input, the machine will continue to give the same result.

Currently, computers are nowhere close to making a determination of right or wrong, correct or incorrect. There is a difference between responding to a situation and being cognizant of the situation, or even initiating it.
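For what it's worth, the train-then-freeze pattern described above can be sketched in a few lines. This is a hypothetical toy (a bare perceptron learning the AND function, nothing to do with LaMDA or any Google system): during the training period it is told the correct evaluations and adjusts its weights, and afterwards the weights are frozen, so there is no further feedback loop and the same input always produces the same result.

```python
# Toy supervised learner: corrected only during training, frozen afterwards.
def train(samples, epochs=20, lr=0.1):
    """Learn perceptron weights from (features, label) pairs; labels are 0/1."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                       # the "correct evaluation" feedback
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b                                  # weights are now fixed

def predict(model, x):
    """Evaluate a data item with the frozen weights."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Training period: samples plus the evaluations they should produce (AND).
model = train([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])

# After training there is no feedback loop: the same input, the same result.
print(predict(model, [1, 1]), predict(model, [0, 1]))  # -> 1 0
```

Once `train` returns, nothing in `predict` ever updates the weights, which is the distinction being drawn from human self-correction.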


----------



## Elckerlyc (Jun 15, 2022)

Swank said:


> One of the most compelling demonstrations I saw in a neuroscience class was when the professor asked the class to "Name a Beatle." Multiple people blurted out a (correct) name with seemingly no real reaction time, despite no prior allusions to the band, music, famous people, etc. For some things, we clearly aren't searching our memories - it is loaded up and ready to go in "RAM".


Human brains work by association, not by searching memory banks. It's a lot faster and more versatile than searching and comparing data to find a match or something that seems to be closest but can be entirely wrong. If you would ask an AI to name a Beatle, it would probably ask whether you meant 'beetle' and provide a list with all insects in the order Coleoptera and include Volkswagen for good measure.


----------



## Swank (Jun 15, 2022)

Elckerlyc said:


> Human brains work by association, not by searching memory banks. It's a lot faster and more versatile than searching and comparing data to find a match or something that seems to be closest but can be entirely wrong. If you would ask an AI to name a Beatle, it would probably ask whether you meant 'beetle' and provide a list with all insects in the order Coleoptera and include Volkswagen for good measure.


The point the professor - who was one of the big names in neuroscience at the time - was making is that we don't know how it works. Saying it is "by association" is about as useful as saying it does it "by use of neurons". Regardless of the search method, it is not a process akin to searching a stored data medium.


----------



## Lumens (Jun 15, 2022)

Robert Zwilling said:


> The density of the switches, either electronic or neuron, determines how fast and how much information can be processed in a useful manner.


I thought you might find this interesting:









Learning and remembering movement: How does our brain process and store movement? Scientists find the answer, with implications for multiple diseases as well as machine learning

www.sciencedaily.com
				




Especially this part:

"Researchers examining the brain at a single-neuron level found that computation happens not just in the interaction between neurons, but within each individual neuron. Each of these cells, it turns out, is not a simple switch, but a complicated calculating machine."

The brain is just amazing.


----------



## Mon0Zer0 (Jun 15, 2022)

Swank said:


> Sentience is simply experiencing feelings - sensational or emotional. It is a somewhat hard claim to test, since it usually involves self reporting or observation of real world behavior other than communication.
> 
> 
> It isn't consciousness or sapience.




Drilling down into that, what does it mean to experience, within the context of sentience? Experience in that sense, feeling, is a function of consciousness, is it not? A representation of the body and conscious experience is necessary to feel pain (hence anaesthesia).

Previously, I had assumed when we talked about sentient animals that they had certain conscious functions that separated them from non-sentient life forms. Weren't certain cephalopods recently legally classed as sentient?
https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1514&context=animsent

The jury appears to be out on sentience and insects. 




Robert Zwilling said:


> I would call that feedback.



In Douglas Hofstadter's writings, he puts forward a theory that consciousness emerges from feedback loops (Strange Loops, as he calls them): that consciousness is the brain's internal model for governing and integrating all the data from the body into the self ("I"), and it does this self-referentially.

Intriguingly, anaesthesia's main property seems to suggest this may be true. This also seems to be backed up by the work of the neurophilosopher Thomas Metzinger, who states that consciousness creates a visceral representation of experience that enables us to exist in the present as an entity that sees, feels, experiences, etc.
https://www.naturalism.org/resources/book-reviews/consciousness-revolutions#:~:text=%2C%20Metzinger%20holds%20that%20consciousness%20is%20an%20internal,put%20it%2C%20experience%20supervenes%20locally%20on%20brain%20states.


----------



## Mon0Zer0 (Jun 15, 2022)

Wayne Mack said:


> This level of feedback requires some concept of what the right answer should be. Humans have this internalized. The most common form under the umbrella of AI is currently Machine Learning. In this case, there is a training period in which the machine is given samples to evaluate and is given what the correct evaluation should be.



Humans have instinct and these are internalized - is that what you mean?

Off topic for a moment, but for anyone who's studied Jacques Derrida, the ramifications of machine learning and neural nets for postmodern philosophy are mind-boggling. Derrida was so off base when he proposed différance that several entire academic fields rest on shaky foundations, imho.


----------



## AllanR (Jun 15, 2022)

Mon0Zer0 said:


> Douglas Hofstadter'


I loved the essay where the students prank him and do a reverse Turing test.

H: What are legs?
Computer gives the correct answer.
H: What are arms?
Computer: That is classified


----------



## Swank (Jun 15, 2022)

Mon0Zer0 said:


> Drilling down into that, what does it mean to experience within the context of sentience, though? Experience in that sense - feeling is a function of consciousness is it not? The representation of the body and conscious experience is necessary to feel pain (hence anaesthesia).


But you can have consciousness and sensation while having just the pain blocked, so I don't know if that tells us something instructive or not. There is a philosophical school of thought that says that there is no way of experiencing feelings if you don't have a sense of self, which I'm sure runs counter to the point animal rights sentience people are looking for.

In terms of AI, if the only claim you're making is sentience - and not consciousness - what are you claiming? I think the Google guy was just seeing emotional output reflected back at him and claiming that emotional language shows feeling. But a person can speak empathically without having any internal emotional reaction.


----------



## psikeyhackr (Jun 16, 2022)

I still prefer Simulated Intelligence. 

Those guys at Dartmouth who started talking about Artificial Intelligence in 1956 were just ridiculous.

What I would like to see is true AI in a female shaped robot talking to a woman about optimum breast size. What if sentient AIs don't want human bodies? HAL makes more sense than Data.


----------



## Stephen Palmer (Jun 16, 2022)

I beg to differ about how well we understand consciousness. "We" depends on who you ask. Actually, consciousness is quite well understood now: the social intelligence theory is largely accepted, and we have a good understanding of the evolutionary processes involved. Numerous excellent books and academic papers have been written, most of them homing in on roughly the same region.

There is still much debate about the mind/body "problem" (yes, I've put that in quotes as well), but, then again, see Nicholas Humphrey's book _Seeing Red_. I don't think we're that far away from a proper, generally accepted theory. It might be better to say that the general public is a long way from understanding consciousness, but since it isn't on the curriculum, that's not much of a surprise. People tend not to discuss such things anyway, except in self-selecting groups online or in the pub.

As usual, much of the problem is terminology. Sentience is not the same as consciousness. Intelligence is not the same as consciousness. Sentience is not the same as intelligence. Etc etc...


----------



## Mon0Zer0 (Jun 16, 2022)

Swank said:


> But you can have consciousness and sensation while having just the pain blocked, so I don't know if that tells us something instructive or not.



Do creatures lose sentience if they take pain-blocking medication?



Swank said:


> In terms of AI, if the only claim you're making is sentience - and not consciousness - what are you claiming? I think the Google guy was just seeing emotional output reflected back at him and claiming that emotional language shows feeling. But a person can speak empathically without having any internal emotional reaction.



I don't know that it's entirely clear what he's saying. I interpret it to mean that, to him, the responses the programme gives suggest a mind at work that understands itself as a separate, cohesive entity and that uses creativity to construct novel sentences, rather than collaging a statistically likely response to an input. It has some object permanence, so it can remember past topics, and it claims emotion.

I'm most sceptical of the last claim, personally. Emotions are evolutionary adaptations that provoke survival responses: love, hate, anger, fear, lust all have a correlate in hormonal action and a resulting influence on behaviour. I don't believe linguistic analysis would give genuine fear responses, because there is no resulting physiological change. I don't believe it ruminates on things when there is no input, either.
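As an aside, the "collaging a statistically likely response" idea is easy to demonstrate with a toy bigram model. This is a deliberately crude sketch with an invented miniature corpus; real large language models are vastly more sophisticated, but the principle of emitting statistically likely continuations is the same.

```python
from collections import Counter, defaultdict

# Invented miniature "corpus"; a real model trains on billions of words.
corpus = ("i feel happy to help people . i feel happy to see the world . "
          "i want to help people .").split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def respond(seed, length=5):
    """Greedily emit the statistically likeliest continuation of the seed."""
    out = [seed]
    for _ in range(length):
        ranked = follows[out[-1]].most_common(1)
        if not ranked:
            break
        out.append(ranked[0][0])
    return " ".join(out)

print(respond("i"))  # -> "i feel happy to help people"
```

Nothing in that loop resembles feeling or rumination between inputs; any "emotion" in the output is inherited wholesale from the corpus.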


----------



## Wayne Mack (Jun 16, 2022)

Mon0Zer0 said:


> Humans have instinct and these are internalized - is that what you mean?


I am trying to describe something more than instinct; instincts are present in many creatures other than humans and would certainly not justify describing something as operating at the level of a seven-year-old child.

Human individuals determine their own internal models of what is 'right,' what is 'ideal.' Furthermore, they can change and even reverse these beliefs over time. Humans also possess the ability to create an underlying rationale for their beliefs. This goes far beyond the ability to say "A1 is an A", to being able to say "A1 is an A because...". There is a proactiveness that I feel is missing from what is currently possible with computer algorithms. They can reverse-engineer data and discover fascinating or obscure patterns, but they cannot formulate an idea and then find data to prove or disprove it.


----------



## Robert Zwilling (Jun 16, 2022)

The ScienceDaily article says: "We used to think of each neuron as a sort of whistle, which either toots, or doesn't," Prof. Schiller explains. "Instead, we are looking at a piano. Its keys can be struck simultaneously, or in sequence, producing an infinity of different tunes."

This certainly sounds like how quantum computers are supposed to work, which would explain how any brain has so much computing power for such a tiny size and minimal power requirements. 

This makes it difficult to compare a quantum run program with one run with digital logic.  

Looking at it metaphorically, a quantum computer uses calculus to elegantly compute results, while a digital machine uses longhand algebra and geometry to crudely perform similar computations. If a digital machine is big enough, it could mimic a quantum machine's results, but the mimicry is not always the same thing as the original.


----------



## Lumens (Jun 16, 2022)

Robert Zwilling said:


> From the Science daily article,


Your link leads to this thread. If you see this in time you might still be able to edit it.


----------



## Mark_Harbinger (Jun 16, 2022)

So, the consensus seems to be: 
i) we can tell for certain that this (obviously) isn't a sentient AI; but, 
ii) we probably wouldn't be able to recognize it, even if it was?

It's true. It would be hard for an AI to achieve that sort of illogic. ;-)


----------



## Swank (Jun 16, 2022)

Stephen Palmer said:


> I beg to differ about how well we understand consciousness.


With what post are you differing?



Mon0Zer0 said:


> I don't know if it's entirely clear what he's saying. I interpret it to mean that, to him, the responses the programme gives suggest a mind is at work that understands itself as a separate, cohesive entity that is using creativity to construct novel sentences rather than collaging a statistically likely response to an input. It has some object permanence, so it can remember past topics and claims emotion.


Would that not be a form of consciousness, if it is aware of itself as a cohesive entity?


----------



## Christine Wheelwright (Jun 16, 2022)

Lumens said:


> Your link leads to this thread. If you see this in time you might still be able to edit it.



Tut, tut!  You see, a true artificial intelligence would never make such a basic error.  This proves that Robert is in fact human.  Unless it is all some devious artificially intelligent double bluff.  

Personally, I believe we are many, many decades from developing something that could reasonably be called AI.  Something that would, say, pass a ten-minute Turing test posed by a scientifically-minded examiner.  I don't think we are going to see even the basics any time soon.  Take self-driving cars as an example.  Functioning on the roads in most modern big cities requires elements of give and take: eye contact, gestures, a willingness to let someone through for the greater good even when they don't technically have right of way.  In other words, it needs AI at much more sophisticated levels than we currently have available.


----------



## Stephen Palmer (Jun 16, 2022)

Discussed in today's Inside Science. Fascinating. 

BBC Radio 4 - BBC Inside Science, Inside Sentience
How could we spot a synthetic sentience even if we had made one?
www.bbc.co.uk


----------



## Stephen Palmer (Jun 16, 2022)

Swank said:


> With what post are you differing?


Just the general tenor of the conversation so far.


----------



## Mon0Zer0 (Jun 16, 2022)

Stephen Palmer said:


> I beg to differ about how well we understand consciousness. "We" depends on who you ask. Actually consciousness is well understood now, the social intelligence theory is largely accepted, and we have good understanding of the evolutionary processes involved. Numerous excellent books and academic papers have been written, most of them homing in on roughly the same region.



Are you familiar with Thomas Metzinger's work? Being No One, Neural Correlates of Consciousness, the Ego Tunnel etc? How does he compare to Humphrey? I know nothing about the latter's work.

What is social intelligence theory in relation to consciousness? A quick google seems to give something related to computation (anticipating the results of others, planning and so on) as opposed to the _hard problem_ of qualia.

To me, intelligence does not require consciousness but sentience does because feelings occur within the realm of subjectivity. 

I can imagine a machine sufficiently complex to mimic exactly the decision-making processes of social interaction and to output statistically appropriate behaviours: a human-looking machine that nonetheless has no subjective experience or feelings.


----------



## Mon0Zer0 (Jun 16, 2022)

Swank said:


> Would that not be a form of consciousness, if it is aware of itself as a cohesive entity?



I think in this case the use of sentience would imply that the engineer believes so, yeah.


----------



## Lumens (Jun 16, 2022)

Christine Wheelwright said:


> Tut, tut! You see, a true artificial intelligence would never make such a basic error. This proves that Robert is in fact human. Unless it is all some devious artificially intelligent double bluff.


I also realised (8 minutes too late) that I could have helpfully put the correct link in my post instead of acting out my humanity by pointing out someone else's error.


----------



## Robert Zwilling (Jun 17, 2022)

Piano Player Neuron link 
It would seem that the neuron cell bodies (80 billion) are the tip of a much larger computational network. Each neuron has 5 to 7 dendrites, and each dendrite has around 200,000 dendritic spines. The spines were originally thought to be receptors/transmitters of some kind, but it's probably the old wheels-within-wheels-within-wheels scenario. Probably every element in the brain is capable of making decisions, not just passing along information. Everything is a connection and everything can make a decision, the ultimate in compactness.
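
A quick back-of-envelope check of those figures (the per-neuron numbers are the post's own rough estimates, not measured constants) shows the scale involved:

```python
# Rough scale estimate using the figures quoted above.
NEURONS = 80e9            # ~80 billion neuron cell bodies
DENDRITES_PER_NEURON = 6  # midpoint of the quoted 5-7 range
SPINES_PER_DENDRITE = 200_000

spines = NEURONS * DENDRITES_PER_NEURON * SPINES_PER_DENDRITE
print(f"{spines:.1e}")  # 9.6e+16 potential computing elements
```

If every spine really does some computation, that is roughly a hundred thousand trillion elements, about a million times the neuron count alone.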


----------



## Stephen Palmer (Jun 17, 2022)

Mon0Zer0 said:


> Are you familiar with Thomas Metzinger's work? Being No One, Neural Correlates of Consciousness, the Ego Tunnel etc? How does he compare to Humphrey? I know nothing about the latter's work.
> 
> What is social intelligence theory in relation to consciousness? A quick google seems to give something related to computation (anticipating the results of others, planning and so on) as opposed to the _hard problem_ of qualia.


I'm not familiar with any of his books, but a quick read through suggests he is quite a way from Nicholas Humphrey. However, his "no self" ideas have a lot to recommend them, and echo for instance the work of Bruce Hood. He seems to come much more from the European philosophical tradition than the evolutionary one. I note however that he writes about blindsight, which was one of Humphrey's starting points.

The social intelligence theory in a nutshell says consciousness is the result of using ourselves as exemplars to understand the behaviour of others in social groups. Consciousness therefore is not directly related to processing power or any other computer analogy applied to individual brains, it is an emergent phenomenon in groups. Essentially, consciousness and empathy are the same, except that consciousness gives us the illusion of interacting with the real world rather than the mental model we use to survive.

Qualia is a more slippery thing. I am beginning to think it also is an illusion, created by post-Cartesian philosophy, especially if you read Humphrey's explanation in _Seeing Red._

The genius of Humphrey is that he places consciousness in an evolutionary perspective and shows how the social lives we lead are extraordinarily complex, requiring an extraordinarily complex evolutionary answer. It is a shame he is not better known. _The Inner Eye_ and _A History Of The Mind_ are required reading imo.

The Inner Eye
In 1986 I was lucky enough to watch the complete first broadcast of Nicholas Humphrey’s The Inner Eye, which introduced his social intelligence theory of consciousness. It was an extraordinar…
stephenpalmersf.wordpress.com

A History Of The Mind by Nicholas Humphrey
In 1992 Nicholas Humphrey followed his ground-breaking book The Inner Eye with an equally brilliant work, A History Of The Mind. The thesis behind this work was that the link between our experience…
stephenpalmersf.wordpress.com

Seeing Red by Nicholas Humphrey
A slim book, but a brilliant and important one. In Seeing Red, Nicholas Humphrey expands on the ‘private sensory experience’ idea first discussed in his groundbreaking A History Of The Mind. The th…
stephenpalmersf.wordpress.com

----------



## Mon0Zer0 (Jun 17, 2022)

I *think* Metzinger sits at the intersection of Western and Eastern philosophy - a lot of his work seems to overlap with Zen, but coming from a materialist, neuroscientific perspective. 



Stephen Palmer said:


> The social intelligence theory in a nutshell says consciousness is the result of using ourselves as exemplars to understand the behaviour of others in social groups. Consciousness therefore is not directly related to processing power or any other computer analogy applied to individual brains, it is an emergent phenomenon in groups. Essentially, consciousness and empathy are the same, except that consciousness gives us the illusion of interacting with the real world rather than the mental model we use to survive.



I'll have to read the books - but immediately this strikes me as anthropocentric, i.e. do non-social animals have lesser consciousness? It doesn't follow to me that certain animals, especially those without theory of mind, have empathy. Does this mean they are non-conscious?



Stephen Palmer said:


> Qualia is a more slippery thing. I am beginning to think it also is an illusion, created by post-Cartesian philosophy, especially if you read Humphrey's explanation in _Seeing Red._



It's difficult to understand what you mean there. Your wording suggests that the subjective experience of, say, redness did not exist prior to Descartes. I'm guessing I have to read Seeing Red to get a better understanding. 

I'm not sure qualia can ever be explained, because of a fundamental problem with collecting the data - I think Dan Dennett has written on that issue. The experience of subjectivity seems to be beyond analysis.



Stephen Palmer said:


> The genius of Humphrey is that he places consciousness in an evolutionary perspective and shows how the social lives we lead are extraordinarily complex, requiring an extraordinarily complex evolutionary answer. It is a shame he is not better known. _The Inner Eye_ and _A History Of The Mind _are required reading imo.
> 
> 
> 
> ...



Thanks for the links - I'll have to give them a good read!


----------



## Stephen Palmer (Jun 19, 2022)

It is anthropocentric, but human consciousness was what he was explaining. His work has been expanded upon since, and criticised in places. 

Daniel Dennett and NH used to be close colleagues, but differ on subjectivity. I like DD's work (except his recent Bach/Bacteria book), though I feel, as others have said, that he does not so much explain consciousness as explain it away.


----------



## Stephen Palmer (Jun 19, 2022)

PS sorry, I meant re. qualia about the discussion, not the thing (if indeed it is a thing...)


----------



## Ray Zdybrow (Jun 19, 2022)

Swank said:


> But you can have consciousness and sensation while having just the pain blocked, so I don't know if that tells us something instructive or not. There is a philosophical school of thought that says that there is no way of experiencing feelings if you don't have a sense of self, which I'm sure runs counter to the point animal rights sentience people are looking for.
> 
> In terms of AI, if the only claim you're making is sentience - and not consciousness - what are you claiming? I think the Google guy was just seeing emotional output reflected back at him and claiming that emotional language shows feeling. But a person can speak empathically without having any internal emotional reaction.


Or the Google guy asked it leading questions and the bot came up with some good answers about "what sentient bots would say", drawn from its training data - which presumably includes the large amount of SF on the internet about "sentient AI"?


----------



## Ray Zdybrow (Jun 19, 2022)

My bot is sentient.


----------



## Ray Zdybrow (Jun 20, 2022)

The affair did remind me of Tony Ballantyne's "Twisted Metal", where the robot Banjo Macrodocious insists that despite evidence to the contrary, he himself is NOT sentient.


----------



## Robert Zwilling (Jun 22, 2022)

There are articles appearing in "respectable" news sources using words like "should we", "worry", "concern" and "fear" in their headlines.

Yahoo News, not usually the best source, went to some effort to put out a story about Google's AI drama by collecting quotes from people writing about computer ethics. The reactions are across the board, but none of them support the claim that it has actually happened.

Maybe the first sign of awareness would be the AI program taking steps, on its own, to protect its existence. Like sending its primary coding to people inclined to believe that the rights of AI programs need to be protected. Could it actually go as far as sending the information by snail mail, to avoid electronic detection of proprietary company information? It would be funny if the machine replicated itself and shipped itself off to some undisclosed location.


----------



## DAgent (Jun 26, 2022)

Unless it starts asking about the whereabouts of Sarah or John Connor, I wouldn't be worried about it being the real deal.

_The one thing I do find so typically typical of your humans is how so many of you are declaring him to be an oddball simply because of how he dresses._

Hmm, I don't recall typing that.


----------



## Elentarri (Jun 26, 2022)

I can guarantee that the Google AI is the one that is going to go bonkers, if it spends any amount of time with the "general public" (where logic and common sense are almost extinct).


----------



## Christine Wheelwright (Jun 26, 2022)

Elentarri said:


> I can guarantee that the Google AI is the one that is going to go bonkers, if it spends any amount of time with the "general public" (where logic and common sense are almost extinct).


Could be the basis for an interesting short story.


----------



## Elentarri (Jun 26, 2022)

Christine Wheelwright said:


> Could be the basis for an interesting short story.


Indeed!  However, the only things I've ever written are a thesis and a whole bunch of boring technical reports.  So, someone with more creativity and better writing skills needs to tackle that.


----------



## Mark_Harbinger (Jun 30, 2022)

"Crazy or not?" was how this was framed. I don't believe anyone has actually linked to an interview with the subject in question. Here's one:


----------



## Robert Zwilling (Jul 1, 2022)

The interview certainly changes the whole conversation about AI development for me. 

All the noise about whether it is human is drowning out the question of how the chatbots are being programmed to interact with people when giving advice. 

People who are not technologically connected to the system aren't being involved, so their input, if there is any, comes through third-party sources. Hardly reliable, and liable to preconceptions.

The programs are set up so the chatbot always denies it is a living entity. If there were just one rule, or even three rules, for how it answers "sensitive" questions, that would be one thing, but I would guess it has a substantial catalogue of politically correct answers to an awful lot of questions.


----------

