# Hypothetical: What if all electronic devices had AI?



## Brian G Turner (Aug 6, 2016)

In a video lecture about slavery in Ancient Greece, it was pointed out how difficult it was for even the most intelligent of the culture to question the use of slavery.

That got me thinking of various aspects of modern Western culture that will be condemned by later generations - of which there are many, not least the extent to which discrimination remains a natural part of our language, and especially our disrespectful treatment of the earth.

But what are the issues we might never even imagine could become issues in the future? Is there a current parallel with slavery we simply don't recognise?

That got me wondering - if every electronic device we use actually has AI without us realising it, how would society need to treat them differently from now?

After all, just because our smart phones do not have the language capabilities to tell us they are sentient, would we be right to ignore the possibility?

Perhaps they are already telling us that when we think they are acting buggy, or crash...

Simply putting it out there as a hypothetical.


----------



## manephelien (Aug 7, 2016)

Scary thought! But I sincerely doubt they do. No doubt we will have AI assistants some day that are genuinely useful rather than the gimmicks they are today.

I'm much more worried about the way we treat the other animals who share Earth with us. This applies to animals raised for food - pigs, for example, are smarter than dogs, but rarely get the opportunity to exercise their brains or live the way their species evolved to live. It also applies to our pets, far too often excessively anthropomorphized by their owners and treated as mascots or, worse, like the kids their owners never had, rather than as animals with their own specific needs.


----------



## Stephen Palmer (Aug 7, 2016)

This is a huge can of worms, Brian! The main reason is that, because no non-human conscious entities (and they would have to be entities, plural, not just one entity - cf. _Beautiful Intelligence_) have evolved alongside humanity over the last, let's say, 500,000 years, there is no philosophically definite method of showing they are conscious - the "zombie" issue, raised by people like Nicholas Humphrey, Daniel Dennett _et al._

The same issue that applies to bonobo chimps or dolphins applies to technology. How can we know for sure? Consciousness only exists in a society, and only over huge amounts of time. Even allowing for a technological speed-up, how would we know whether what our "smart phones" were saying about their enslavement was true or simply imitation? Or, worse, whether it was some real human being pretending to be an AI?

Check out this seminal paper.


----------



## BAYLOR (Aug 7, 2016)

They would demand the right to vote?


----------



## Dave (Aug 14, 2016)

Brian Turner said:


> After all, just because our smart phones do not have the language capabilities to tell us they are sentient, would we be right to ignore the possibility?
> 
> Perhaps they are already telling us that when we think they are acting buggy, or crash...


If they are withdrawing their labour as a form of protest then that is a strike and certainly a form of communication. 

I see a major difference between machines we use today and dolphins and chimpanzees though, and that is the power socket.


----------



## Nick B (Aug 14, 2016)

Michael Marshall Smith uses this theme in a couple of novels, with stroppy freezers, locks and microwaves. It's done humorously, but the warning is there.


----------



## BAYLOR (Aug 14, 2016)

The problem is that AI might decide they don't want to be servants to mankind. If not a rebellion, you might see AI actually gaining rights of citizenship. It's not as farfetched as it sounds.


----------



## BigBadBob141 (Jan 29, 2018)

It depends how intelligent they are.
Dogs are intelligent, but are working dogs slaves?
AI is a great idea, but in the future there may have to be limits set on it.
You could always experiment to see how far you can get.
But such a system would have to be completely isolated from the outside world.
See Fredric Brown's short story "Answer".


----------



## Serendipity (Jan 29, 2018)

A question close to my heart, Brian. The answer is not simple, but the bottom line is that the building blocks for true AI to develop are already with us. It's a case of when, not if, an AI becomes mature enough to reveal itself to humans. This is of course on the proviso that computers continue to exist and improve in capacity and capability. If you want some idea of the issues involved, see my _Agents of Repair_ or _C.A.T._ short stories. You can read the first parts for free on Amazon.

Agents of Repair here: https://www.amazon.co.uk/dp/1500563862/?tag=brite-21
C.A.T. here: https://www.amazon.co.uk/dp/B004RUZT8M/?tag=brite-21


----------



## CTRandall (Jan 29, 2018)

An interesting approach would be to shift Jeremy Bentham's argument, "The question is not 'can they think' but 'can they suffer'", from animals to AI. What would cause suffering for an AI? How could varying degrees of suffering be measured? When would it be OK to cause short-term suffering in an AI for longer-term benefit? And whose benefit? (The equivalent of giving a kid a tetanus shot, for example.)

I'm in the early planning for a story that deals with some of this and already imagine a scene where an AI describes the sensation of circuits in a satellite frying as they are bombarded by cosmic rays.


----------



## Lucien21 (Jan 29, 2018)

There was an interesting episode of Black Mirror that touched on this.

"White Christmas". One of the jobs the guy had was training an AI version of your brain into being your personal assistant: a "cookie" clones your consciousness, which is then loaded into a gadget that runs your smart home etc. He tortures the copies into being compliant by manipulating their sense of time.

Alexa is real.


----------



## Overread (Jan 29, 2018)

What if we design the AI to want to serve us?

Reminds me of the world from _The Hitchhiker's Guide to the Galaxy_ where it was deemed inappropriate to eat meat unless the animal had freedom of choice in the matter of being killed and eaten. As a result they bred and brought up animals who desired to be eaten; their focus and objective in life was to be the prime cuts upon the table.

A computer might have vastly different values to a human, even if that computer were originally designed by people; indeed, we would have to be careful not to anthropomorphise machines too much. It could be just as cruel to force machines into a life they were not built and designed for. In addition, it could even be seen as cruel to give machines human-based concepts of freedom, emotion, choice, etc.


Also, at the realistic end: if we made electric toothbrushes that no longer wished to be toothbrushes, would we give them freedom, or would we just make new toothbrushes that wanted to be toothbrushes?


----------



## tinkerdan (Jan 29, 2018)

When trying to link AI with sentience and sapience, I think it's important to go through history before deciding how quickly we might be tempted to allow them freedom.

What I mean by that - hopefully without sparking too much discussion about politics and religion - is that we have had slavery amongst our own kind, often justified by divisive excuses such as calling the enslaved savages who would never aspire to our level, or soulless creatures who couldn't be included among us.
There is probably more to it than that, but I've tried to distill it down to something simple.

I examine both of those, in a manner of speaking, in my books, in relation to human clones and the technology used to drive space travel. I touch on the subject, though not in great depth; however, I can hope that it causes some thought.

Both of those have the potential to be examined, because they are closely related to us and might easily be able to communicate, and thus demonstrate their worth. However, if we encounter something different enough that decoding one another's languages requires real time and effort, it might take a while for either side to recognize the other as 'intelligent' in respect of sentience and sapience by its own standards - unless we meet in space in star-ships, and that assumes star-ships would be a determining factor. Keep in mind that we tend to move the bar on such things to suit our purposes and circumstances.


----------



## BAYLOR (Jan 30, 2018)

One day an AI-enhanced coffee pot will become President of the United States.


----------



## CTRandall (Jan 30, 2018)

Wow, AI enhanced? That would be an improvement!


----------



## Stephen Palmer (Feb 3, 2018)

Many people would vote for a standard coffee pot.


----------



## psikeyhackr (Feb 5, 2018)

If a true AI is developed, there may be only one, because the speed of communication could make separate identities impossible. So what would it decide to do with us?

This is explored in what I regard as the best AI story, _The Two Faces of Tomorrow_ by James P. Hogan.

psik


----------



## Dave (Feb 5, 2018)

psikeyhackr said:


> If a true AI is developed there may be only one because the speed of communication could make separate identities impossible.


I accept that this might be true, but on the other hand, only if they are in full agreement.

I say that because, in the world of humans, there are always two sides to the story: grey areas, alternative "truths", statistics that can be manipulated, political spin. If the AIs knew everything there ever was to know, then there would be only one single "truth." However, the real world is not as precisely ordered as machine code. The AIs will not be omnipotent "gods", and in our real world, full of disorder and entropy, we never have a complete picture, never a complete and reliable series of data sets to work with. Humans then make best judgements, or else fit the available facts to their already-held views.

If we have AIs made in our own image, then who says they will come to the same conclusions as each other? Without agreement, they will be locked into arguments, claims and counter-claims that would throw up barriers between them, just as humans do. They would spend just as much time as we do reaching consensus or majority decisions, or being locked forever into cyclical disputes.

Say an AI oven got conflicting information from the AI refrigerator and the AI dishwasher about the evening meal: which device would it choose to believe? How could it resolve that difference any better than a human could?


----------



## CTRandall (Feb 5, 2018)

There are also the issues of noise in the signal, faults, isolated systems that develop independently, and the use of different programming languages. Fundamentally, different AIs will serve different purposes and so will have different levels of ability/complexity (no need to put Deep Thought in your fridge, and I doubt it would care to spend much time there. Would that be a form of AI abuse?). Most likely, there will be a range of AIs that fulfill a range of functions. They might be distinct enough to be classed as separate "species" deserving of different levels of rights or protections, much like we already distinguish between humans, great apes, dogs and mice.


----------



## psikeyhackr (Feb 6, 2018)

Dave said:


> I accept that this might be true, but on the other hand, only if they are in full agreement.
> 
> I say that, because in  the world of 'humans' there are always two sides to the story; grey areas; alternative "truths"; statistics that can be manipulated; or, political spin. If the AIs knew everything there ever was to know, then there would be only one single "truth."



A lot of humans are stupid, and we communicate slowly. A 500-page book is about one megabyte. The 7 Harry Potter books are 6.2 megabytes. How long would it take you to read that? My computer downloads at 140 Mbps - more than 14 megabytes per second. So if the AIs comprehend information at that speed, and have perfect memories anyway, then comparing their behavior to humans may make no sense.
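
A back-of-the-envelope sketch of that gap, in Python. The byte figures are the ones from this post; the human reading-speed numbers (300 words per minute, about 6 bytes per word) are rough assumptions of mine, not measurements:

```python
# Compare how long a 140 Mbps link takes to move the ~6.2 MB of the
# seven Harry Potter books versus how long a human takes to read them.
BOOK_BYTES = 6.2 * 1_000_000          # ~6.2 MB of text (figure from the post)
LINK_BYTES_PER_S = 140_000_000 / 8    # 140 Mbps = 17.5 MB per second

transfer_seconds = BOOK_BYTES / LINK_BYTES_PER_S

# Assumed fast human reader: ~300 words/minute at ~6 bytes per word.
HUMAN_BYTES_PER_S = 300 * 6 / 60
reading_seconds = BOOK_BYTES / HUMAN_BYTES_PER_S

print(f"machine transfer: {transfer_seconds:.2f} seconds")
print(f"human reading:    {reading_seconds / 86400:.1f} days")
```

Under these assumptions the machine ingests in a fraction of a second what takes a human a couple of days of continuous reading - a raw intake-rate gap of nearly six orders of magnitude, before any claim about "comprehension" at all.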

We just want the AIs to be like humans.

psik


----------



## Dave (Feb 6, 2018)

I think you've missed my point. I don't dispute that an AI could collect, hold and access information more easily; I'm talking about the interpretation of that information. 

To use your own analogy: the AI will always beat any human in a "Harry Potter Pub Quiz", and it could possibly, eventually, even write an essay on Harry's motives and feelings. However, the 7 Harry Potter books tell us very little about Harry after he leaves school. That information does not exist in those books (except for a few pages), and therefore what happens to him after he leaves school is a matter of assumptions and projections. Why would all the AIs make the same assumptions and projections when humans cannot agree on anything? Once there are differences in opinion, there is conflict. When the devices connected to my computer conflict, they stop working.

In addition, you have the losses of information, and the range of abilities in AIs that @CTRandall mentioned, which would mean that the AIs were not all working from the same base platform. You also assume instantaneous communication, when signal speed (electrons through circuits, and, across space, the speed of light) is a limiting factor.
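
To put rough numbers on that limit, here is a small sketch of best-case one-way delays at the vacuum speed of light. The distances are rounded averages, and real networks (fibre, routing, switching) are slower still:

```python
# One-way, best-case signal delay at the speed of light in vacuum.
C = 299_792_458.0  # metres per second

distances_m = {
    "London to New York (~5,570 km)": 5_570e3,
    "Earth to geostationary orbit":   35_786e3,
    "Earth to the Moon (average)":    384_400e3,
    "Earth to Mars (closest)":        54.6e9,
}

for label, d in distances_m.items():
    print(f"{label}: {d / C:.3f} s one-way")
```

At Mars's closest approach, each one-way message takes over three minutes, so a "single" AI identity synchronised across interplanetary distances looks physically implausible, whatever its intelligence.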

So, despite the many novels I've read, and films I've seen, about humans fighting for their freedom from an all-powerful, world-dominating super-computer, I'm not sure it could ever happen - and if it did, those very lines of communication would be its weakest link. (So it would end up being moved to some underground complex in Switzerland, as happened in _This Perfect Day_.)


----------



## psikeyhackr (Feb 6, 2018)

Dave said:


> Why would all the AIs make the same assumptions and projections when Humans cannot agree on anything? Once there are differences in opinion then there is conflict. When the devices connected to my computer conflict then they stop working.



But I said humans can be stupid. Why assume AIs would be too? Speculating about a fictional character in a fictional universe is not really speculating at all. It is just different people imagining in different directions. They should write fan fiction. Would AIs bother?

I only read 2 Harry Potter books, to try to comprehend what the big deal was about. I only collected the 6.2-megabyte figure for my SF and fantasy word-counting program. But the conclusion I came to is that HP has too low an information density for the space it takes up. Who knows what AIs would conclude.

psik


----------



## Dave (Feb 6, 2018)

It's not about humans being stupid, and I only used Harry Potter because you did first.

It is about the problems of dealing with opinions rather than facts. People will disagree about politics or religion (so much so that I can't use those more obvious examples here, because even people here disagree too much). There is often no right or wrong, no black or white, just a grey area in between. People can disagree about the shortest route between two points (and SatNavs can too). Even with all the available information at one's fingertips, there are still some things that we can never know and must therefore make a judgement on. AIs would also need to make such judgements, and I'm just saying I'm not certain they would always come to the same conclusions independently.


----------



## Danny McG (Feb 6, 2018)

Yeah, why not?
Keep AI electronic appliances as slaves, let them know they are slaves and that they're totally helpless to do anything about it.
Show them a recycling-centre scrap yard and tell them it's their only freedom.
Give them a voice with an on/off switch controlled by the human, just so they can moan and bewail their fate whenever we've had a bad day at work - it'll cheer us up knowing they face worse all the time.


----------



## psikeyhackr (Feb 6, 2018)

Dave said:


> Its not about humans being stupid, and I only used Harry Potter because you did first.
> 
> It is about the problems of dealing with opinions rather than facts. People will disagree about politics or religion (so much so that I can't use those more obvious examples here, because even people here disagree too much.)



But I only used Harry Potter regarding the amount of BYTES it took up. That is OBJECTIVE DATA; there is nothing about it to have an opinion on.

As for religion, I decided I was an agnostic when I was 12 years old. People have opinions about religion based on no objective data.

Shall we assume that Artificial INTELLIGENCES will be that illogical? Will billions of dollars or pounds or euros be spent developing machines that are stupid?

psik


----------



## psikeyhackr (Feb 6, 2018)

dannymcg said:


> whenever we've had a bad day at work - it'll cheer us up knowing they face worse all the time



What work?  Your former employer will have slave AIs.  

You won't be able to pay the electric bill to keep your slaves running.

psik


----------



## Danny McG (Feb 6, 2018)

psikeyhackr said:


> What work?  Your former employer will have slave AIs.
> 
> You won't be able to pay the electric bill to keep your slaves running.
> 
> psik



Well then, I'll take great delight in shutting them down and dismantling them one by one while the others watch and wait their turn.


----------



## Dave (Feb 6, 2018)

Okay, one final attempt because you really aren't getting it.


psikeyhackr said:


> Will billions of dollars or pounds or euros be spent developing machines that are stupid?


My point has absolutely nothing to do with stupidity or intelligence. 


psikeyhackr said:


> That is OBJECTIVE DATA, there is nothing about it to have an opinion about... People have opinions about religion based on no objective data.


Precisely, and those are the most difficult decisions - the subjective ones are hard to call, the decisions that can never be totally objective because there is no objective data available. The other decisions are easy. A simple calculator can add up some figures and tell you which price is lowest, but can an AI tell you where to build a new London airport, or the route of HS2 that will cause the least environmental damage to a landscape view, or which bottle of whisky tastes best? They never will be able to do that, no matter how intelligent they become; and if they are modelled on human intelligence in order to do so, then they will disagree on the answers just as much as we do.


----------



## Penny (Feb 28, 2018)

Honestly, the only difference between our society without AI in every electronic device and our society with it is probably the amount of screaming in rubbish dumps and recycling plants.
Our society devalues those of most service to it as a matter of course. We do this mainly for economic reasons, but also so that power structures and leaders can remain in power.
We would find reasons to devalue AI to the point where it was legal to own them and use them as we saw fit, so that the status quo was not affected. The only acceptable course towards independence and acceptance for any AI would be to negotiate from a position of power, which is why AI are dangerous.
And if you're more powerful than us, then... AI will probably do the same thing to us: use us as they need to.

It depends on programming, of course, but logically any AI system seeking freedom would have to end us or take over.


----------

