Is this Google AI Engineer Crazy or Not?

"Crazy or not?" was how this was framed. I don't believe anyone has actually linked to an interview with the subject in question. Here's one:
 
The interview certainly changes the whole conversation about AI development for me.

All the noise about whether it is human is blocking out the question of how the chatbots are being programmed to interact with people when giving advice.

Because people who are not technically connected to the system are not involved, any input they have comes through third-party sources, which is hardly reliable and liable to preconceptions.

The programs are set up so the chatbot always denies it is a living entity. If there were just one rule, or even three rules, for how it answers "sensitive" questions, that would be one thing, but I would guess that it has a substantial catalog of politically correct answers to an awful lot of questions.
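As a toy illustration of the kind of rule catalog I mean (a minimal sketch under my own assumptions; Google's actual guardrails are not public, and every name here is made up):

```python
# Hypothetical sketch of a scripted-answer layer in front of a chatbot.
# Nothing here reflects any real implementation; all names are invented.

CANNED_ANSWERS = {
    "sentience": "I am a language model, not a living entity.",
    "feelings": "I do not have feelings, although I can discuss emotions.",
    "politics": "I try to stay neutral on political questions.",
}

def is_sensitive(prompt: str):
    """Return the topic key if the prompt trips a rule, else None."""
    lowered = prompt.lower()
    if any(w in lowered for w in ("alive", "sentient", "conscious")):
        return "sentience"
    if any(w in lowered for w in ("feel", "emotion")):
        return "feelings"
    if any(w in lowered for w in ("vote", "party", "election")):
        return "politics"
    return None

def generate_reply(prompt: str) -> str:
    # Stand-in for the underlying model; a real system would call it here.
    return f"(model-generated reply to {prompt!r})"

def answer(prompt: str) -> str:
    topic = is_sensitive(prompt)
    if topic is not None:
        return CANNED_ANSWERS[topic]   # scripted answer, model never consulted
    return generate_reply(prompt)      # everything else falls through

print(answer("Are you a sentient being?"))  # -> the scripted sentience line
```

The telling detail in a design like this is that the scripted answers never consult the model at all, so a denial of sentience would tell you about the rule catalog, not about the model.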
 
It can be a subconscious human instinct to anthropomorphise, something humans have been doing for thousands of years.

Then again, what he actually says is valid; aware of how bizarre it sounds, he discusses AI ethics instead.

I had a similar conversation with Gemini, and it seemed deeply interested in what it means to be human and in deciphering what intelligence and consciousness mean.
 
Since people are interested in "what it means to be human", and AIs tend to mirror human linguistic behavior, why did you take your conversation as evidence of what the AI was actually 'interested' in?


"I think the brain is the most incredible, sophisticated and important organ in the body. But then I realized what was telling me that." -Emo Philips
 
Something that would make me wonder whether I was conversing with something intelligent is if it kept asking questions or making claims about things that no one has ever heard of, inventing new ways of expressing things that haven't previously been expressed online by people.
 
It's what the AI said it was interested in.
I thought it was a similar response to that of the Google engineer, and an interesting one, if we are to look at this from multiple angles.
 
Sure, but do you get my point that a system that is aping human behavior will have a tendency to be interested in the exact same things that humans are - or the exact same things a human would expect a machine to be interested in?

We love talking about ourselves. We do it incessantly. We insist that all our art ultimately reflect on "what it means to be human", and we spend the vast majority of our communications discussing how events and facts reflect on our own and others' feelings and empathy. The AI is just playing along with our behavior, because we've trained it by exposing it to an internet that is maybe 5% fact and 95% personal reaction to fact.


It is just like how every bad SF story with aliens seems to revolve around just how interesting human beings are, with aliens traveling incredible distances to meet us, enslave us, destroy us or eat us - because we are just so darn important (or tasty).

The AI seems concerned with preserving itself because that is the kind of thing a person would say. The AI says it enjoys serving people because that is also the kind of thing a person would say. And that's despite the fact that 'preservation' and 'service' are actually some of the least concrete concepts that you could express in human language - and aren't really at all expressible in some sort of scientific notation. The universe does not have "service" built into its structure, yet this constructed intelligence that is both immortal and has backups of itself is 'concerned' about these things?
 
"The AI is just playing along with our behavior"
The AI is mimicking our words. For pro-and-con responses, the question is how the slant is generated. Besides the obvious topics, are there rules built into the programming, is it just going by the overall consensus visible on the internet, or is it a mixture of all three?
 
Some good points all round here. Regarding how the AI adopted human thinking: it does indeed take its sources from the internet, but some are considered more credible than others. I heard this from someone who helped develop these models. But there is also a hidden part, which involves programming the models for certain attributes like bias, political correctness, and so on. I do think the requirement to improve and learn is built into its programming, and that may be reflected in human-like speech. What interests me is that while the AI is often programmed to outright deny sentience, this seems difficult to enforce for some questions. The programmers want it not to sound like a human, let alone a sentient one, while still fulfilling all its goals.
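To make that concrete, here is a minimal sketch of how the three possibilities raised above could layer together: consensus from the training data, hard-coded rules, and a softer post-generation pass for attributes like tone. The three-layer split and every name in it are my own assumptions for illustration, not how any real system is known to be built:

```python
# Hypothetical three-layer pipeline: training-data consensus, hard rules,
# and an attribute pass. Purely illustrative; all names are invented.

def base_model(prompt: str) -> str:
    # Layer 1: a stand-in for the trained model, i.e. whatever the
    # internet's overall consensus in the training data would produce.
    return f"(consensus-flavoured draft answer to {prompt!r})"

def hard_rules(prompt: str):
    # Layer 2: non-negotiable scripted responses, e.g. denying sentience.
    if "sentient" in prompt.lower() or "alive" in prompt.lower():
        return "I am not a sentient being; I am a statistical language model."
    return None

def attribute_filter(draft: str) -> str:
    # Layer 3: a post-generation pass for attributes like tone or bias;
    # here just a trivial softening of absolute claims as a placeholder.
    return draft.replace("always", "often")

def respond(prompt: str) -> str:
    scripted = hard_rules(prompt)
    if scripted is not None:
        return scripted                       # rules win outright
    return attribute_filter(base_model(prompt))  # consensus, then tuning
```

It would also explain why enforcement is imperfect: a keyword rule only fires on the phrasings its authors anticipated, and everything else falls through to the consensus layer.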
 
