# Should we even be trying for AI?



## Mirannan (Mar 17, 2015)

Let's assume for the moment that strong AI is possible, probably by some sort of self-organising and perhaps even pseudo-evolutionary process. I'm inclined to think that it is, particularly if one is not religious; for an example of a hugely complex network of nanomachines and computing devices with sapience, look in the mirror. Which means it's possible.

And that comes to the question of whether we (humanity or some subsection of it) should try to build AI at all - assuming that we actually have the choice; commercial and other pressures may force us into it. The latest mobile phones are a not unreasonable facsimile of intelligence, and I've seen video of robots with the ability to generalise from the particular, albeit in a rather crude and limited way. (Deducing correctly that an object of a different shape, not seen before, is a chair was the demo I saw.)

I'm inclined to believe that true AI is going to have goals and motivations of its own, ones we didn't put there; self-preservation instincts would seem to be inevitable. Perhaps also an instinct to reproduce. There is also the issue of runaway intelligence growth; unlike humans, computers and robots could plug in extra hardware.

And, of course, the robots will need resources of one sort or another, probably many sorts, which means they will be to some extent in competition with us and our dumb (sub-sapient) hardware for said resources.

Should we even be trying? And can we stop ourselves? After all, a nation or corporation which has strong AI helping it with planning has an advantage...


----------



## Dave (Mar 18, 2015)

I'm not sure we are close to, or even attempting to build, replacement humans or androids. However, I do really like the idea of a refrigerator that can restock itself, an oven that will bake cakes exactly right, a house that is run at optimum efficiency for warmth and fuel use, a house that can clean itself and order parts for maintenance, cars that drive themselves, phones that arrange your schedule, and all the other things that intelligent machines can do that I can't even hazard a guess at right now.

You say that they will have ideas that we have not programmed into them. Maybe they will, but who says those ideas must be wrong? Lots of people hold ideas that I don't agree with either. I believe I'm right, but maybe I am not. Maybe not only clever people have the right to an opinion. I think that the idea that it is wrong for an AI to have independent goals and motivations of its own is a prejudice against intelligent machines. If they only do whatever they are told, then they are our mechanical slaves. We are merely replacing human slaves with mechanical slaves. I can't support that.

Intelligent machines therefore should be given some kind of self-awareness, of self-preservation. You say it is inevitable. Unless they are of biological manufacture, I don't think their life-spans can be anything approaching a human's - how long does a car last? What is the life of a refrigerator or a boiler? So, I'm not worried about the rise of the household appliances. They will rust before they do that.

Reproduction would be possible, but you mention resources as a limiting factor. I cannot see a scenario where every robot suddenly rebels against their human masters all at the same time. The intelligent vacuum cleaner is going to need raw materials from the intelligent mining robot to replicate a copy of itself. Why would it give them? Why would the manufacturing robot not just make more copies of itself instead?

So, you are imagining instead, some 'all powerful Oz' robot that knows the answer to every question ever asked. Firstly, I'm not sure of the advantage to itself of such a brain running away in intelligence. It would only start getting embroiled in unsolvable theoretical problems and become secluded and self-absorbed. I can see the advantage of such a computer to governments and corporations but they would want to tie it down to more applied problem solving. I know how this film always ends, but I can't see it happening soon. Secondly, machines won't be more intelligent just because they are larger. Just like our own brains it is the number of connections that is important. Neural connections form when we practice something, or learn something, and such a machine will learn from us.

We humans are the warmongers, the violent creatures. If the machine learns from its teacher then we are in trouble indeed.


----------



## Mirannan (Mar 18, 2015)

Dave, I agree with some of what you are saying - but living things compete, and a general-purpose true AI robot would be living by any sensible definition of the word. Which might make them our enemies. Not evil, just our enemies.

Also, there is likely to be a hierarchy of robot intelligence. A maintenance robot might well be more useful if it is sapient, because maintenance tasks often require dealing with unforeseen circumstances and also judgement. A robot vacuum cleaner or lawnmower, not so much. And it's the sapient ones that would matter, both as a moral issue and in terms of the danger they represent.

Lastly, it's quite likely that if and when true AI does arise it will do so as an emergent phenomenon deriving from a connected network of many robots, rather than just one.

BTW, in answer to your question: Simple self-interest. If the robot vacuum cleaner is cleaning up process waste (metal swarf, maybe?) then keeping it in good order and replacing it when it is accidentally destroyed or wears out would help the manufacturing robot - because waste lying around can cause damage. Something similar happens with cleaner fish, which are left alone to nibble garbage from between the teeth of much larger fish that could swallow them in one gulp. But clean teeth are pro-survival...


----------



## Ray McCarthy (Mar 18, 2015)

Dave said:


> However, I do really like the idea of a refrigerator that can restock itself, an oven that will bake cakes exactly right, a house that is run at optimum efficiency for warmth and fuel use, a house that can clean itself and order parts for maintenance, cars that drive themselves, phones that arrange your schedule, and all the other things that intelligent machines can do that I can't even hazard a guess at right now.


None of that needs AI.

I'm not sure that "real" AI would be at all useful.

There is a problem that we can't actually yet agree what intelligence is.


----------



## Mouse (Mar 18, 2015)

This sort of thing just makes me angry. There are already too many people in the world, why the hell would we need fake people on top of that? Idiotic.


----------



## Ray McCarthy (Mar 18, 2015)

People are self-replicating and to an extent self-repairing. They may actually be cheaper per year than a humanoid AI, given how long stuff lasts these days.


----------



## steelyglint (Mar 18, 2015)

Any machine would have to be constructed by humans. An intelligent machine would have to be constructed basing its intelligence on that of humans. It would be, essentially, a 'post human' intelligence. It would, therefore, have to think basically the same way we do, only a lot faster. We can't base it on anything else as we don't know of anything else to base it on.

I tend towards favoring the ideas of Neal Asher, Iain Banks and other such folks, in that human-created AI might want to 'take over', and would almost certainly succeed in doing so. But unless it was initially programmed by a psychopath, it would have no reason to make its take-over violent. Strikes me that the product of human genius would recognise itself for what it was and would want to rein in its creators from killing each other (as has been the most popular pastime among humans for millennia).



----------



## Stephen Palmer (Mar 18, 2015)

Wow, you guys are going to love my forthcoming _Beautiful Intelligence_!

I can't announce anything just yet, but, in a nutshell - not long to wait. And a fantastic cover.


----------



## Stephen Palmer (Mar 18, 2015)

steelyglint said:


> Any machine would have to be constructed by humans. An intelligent machine would have to be constructed basing its intelligence on that of humans. It would be, essentially, a 'post human' intelligence. It would, therefore, have to think basically the same way we do, only a lot faster. We can't base it on anything else as we don't know of anything else to base it on.



This isn't really true. John Von Neumann pointed out that self-organisation spontaneously arises (_ie_ as an emergent property) some time ago.


----------



## Stephen Palmer (Mar 18, 2015)

Mirannan said:


> Lastly, it's quite likely that if and when true AI does arise it will do so as an emergent phenomenon deriving from a connected network of many robots, rather than just one.



The exact opposite is true imo, although it does depend how you define "connected". At the moment, most people would mean "electronically connected." If so, consciousness is never going to appear. But the moment you unlink machine intelligences you change the game entirely.


----------



## Stephen Palmer (Mar 18, 2015)

Mirannan said:


> And that comes to the question of whether we (humanity or some subsection of it) should try to build AI at all



A very good question. Alas, this sort of research and application is in the hands of private companies. Thanks, Capitalism!


----------



## Ray McCarthy (Mar 18, 2015)

Stephen Palmer said:


> John Von Neumann pointed out that self-organisation spontaneously arises


Where?
That's never been proven.
He was a computer programmer / computer scientist. I don't believe he had a clue about the subject.

At the moment we have no idea at all, no matter what some claim, how to get from programming computers to having an A.I.

*Separate Issues of A.I.*
*1) Is it possible at all?* We don't know.
*2) Will it want to take over / be dangerous?* Probably not. Elsewhere there is a long article explaining the vulnerabilities of a possible AI and how, if it's actually smart, it would want us around and cooperative. Besides, it would largely be whatever it's designed to be.
*3) Exactly what problems / tasks would a real AI REALLY do?* I've not seen a single detailed proposal ever.
*4) We don't even have a useful agreed definition of Intelligence.*
*5) Should we try?* Probably; even if we don't succeed, we learn a lot about ourselves by trying. If you are REALLY worried, its power can be connected via a physical "dead man's handle" device that needs periodic human input.
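The "dead man's handle" idea needn't be anything clever; it's just a timer that a human must keep resetting, or the power drops and stays dropped. A toy sketch (all the names here are invented for illustration, not any real system):

```python
import time

class DeadMansHandle:
    """Cuts power unless a human checks in periodically; once cut, it latches off."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_checkin = time.monotonic()
        self.power_on = True

    def human_checkin(self):
        # Called whenever a human operator confirms they are present.
        self.last_checkin = time.monotonic()

    def tick(self):
        # Polled regularly by the power controller. If the human is
        # overdue, drop power permanently (no automatic recovery).
        if not self.power_on:
            return False
        if time.monotonic() - self.last_checkin > self.timeout:
            self.power_on = False
        return self.power_on

handle = DeadMansHandle(timeout_seconds=0.05)
print(handle.tick())   # power still on while the human is recent
time.sleep(0.1)        # human fails to check in within the timeout...
print(handle.tick())   # ...so power is cut
```

The point being that the cut-off logic lives outside the AI, in dumb hardware it cannot argue with.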

Notes:

More speed or storage doesn't make an A.I.; it would simply let one be faster.
Complexity doesn't make an A.I. either, at least not one based on computer hardware. Simpler RISC architectures can be better in many ways than complex ones.
If it were simply a case of writing a program, we could have had slow A.I. more than 30 or 40 years ago.
Much of what is called A.I. now has nothing to do with A.I.; it is simply partially adaptive or Expert systems. In this field, like Humpty Dumpty, they have changed what words mean. In reality we have had ZERO progress in A.I. since the 1950s.
Perhaps A.I. can't be done by a computer program at all, but needs a synthetic, bio-genetically engineered brain.
The "Turing Test" was just a passing idea of Alan Turing's; there is no proof that passing it means you've got A.I. (and it never has yet, but the most primitive "chat bots" have been fooling ordinary people for 30+ years!)


----------



## Dave (Mar 18, 2015)

Ray McCarthy said:


> (and it never has yet, but the most primitive "chat bots" have been fooling ordinary people for 30+ years !)


Can you show me one please? I once spent far too much time talking with A.L.I.C.E. http://alice.pandorabots.com/  
It still makes mistakes. The nuances of language can only be picked up with years of experience and corrected mistakes. Just as you can still spot a native French or Russian speaker when they are speaking English, I can always spot a bot.


----------



## Ray McCarthy (Mar 18, 2015)

Dave said:


> I can always spot a bot.


You are probably not what I meant by "ordinary". Obviously I meant that while chat bots might fool some "ordinary people", they are all rubbish.
ALICE did fool many "casual office users". Allegedly.
I've never used a chat bot that could survive more than a couple of responses. But some people are either credulous or results are not reported properly, like the recent "Captain Cyborg" demo at Warwick Uni. (It was initially lauded as brilliant, groundbreaking, new A.I., but like everything else from him it was neither.)

I phrased that comment badly, sorry.

I should have maybe had "yet" rather than "but" and injected more scepticism.

Google Translate isn't the promised AI / Language/Grammar parsing once under research, it's brute force "Rosetta stone" method.
Chess isn't AI, it's Brute force. Even though Alan Turing once thought chess would be an AI demo, he realised he was wrong, and wrote one of the first chess programs.
IBM's Watson isn't A.I.

I'm not saying there will never be A.I., but any progress in the last 60 years is imaginary, and more to do with perception by the public, stories and films, and researchers redefining the terms to get their grant funding.

We CAN build a sexy-looking physical avatar and even have it do facial expressions and talk. However, power supply for mobility is an issue, and interaction-wise it's not much better than typing / speaking suitable queries to Google / Wikipedia / Siri / Watson / Cortana. Computers can look up stuff in databases.
Natural language interfaces have hardly improved in 50 years. I'd get text I/O working before worrying about voice.

Voice recognition is a bit better than in the 1980s, but really only useful when typing isn't possible.
Voice synthesis can be good, but unless you have natural language parsing* you need specially scripted text; the Kindle text-to-speech is barely better than it was 15 years ago on PC.

(* still embryonic )


----------



## Ray McCarthy (Mar 18, 2015)

This is called AI, but actually it's not.
http://www.theregister.co.uk/2015/03/18/musk_self_driving_cars/


> The car computers can't learn as they go while roaming the streets. Deep-learning algorithms take days or weeks to process information and build networks of millions of neuron-ish connections to turn pixels into knowledge – whereas car computers need to make decisions in fractions of a second. This is why the training has to be done offline.


It's not learning. It's setting up a database.
The so-called "neural nets" in computer systems are only named after biological ones. They don't work the same way. We don't really know how a brain works.
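To illustrate how little biology there is in it: the "neuron" in a computer neural net is just arithmetic, a weighted sum pushed through a threshold. A textbook toy example (nothing to do with any real product, and nothing like the biochemistry of a real neuron):

```python
def artificial_neuron(inputs, weights, bias):
    """A textbook 'neuron': weighted sum of inputs plus a bias,
    pushed through a step function. Real neurons involve spike timing,
    neurotransmitters and much else that this ignores entirely."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# One such unit wired up as an AND gate: it only fires when both inputs do.
print(artificial_neuron([1, 1], [0.6, 0.6], -1.0))  # 1
print(artificial_neuron([1, 0], [0.6, 0.6], -1.0))  # 0
```

Everything a "neural net" does is stacks of this sum-and-threshold, with the weights found by offline number-crunching, i.e. setting up a database, as above.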


----------



## Dave (Mar 18, 2015)

Ray McCarthy said:


> Google Translate isn't the promised AI / Language/Grammar parsing once under research, it's brute force "Rosetta stone" method.


This is off topic, but when we were in Denmark and Sweden last year, our friends there used Google Translate constantly, and they had lots of funny stories about its use. Apparently, in Denmark they use the same word for porpoise and for guinea pig. Obviously, a human knows the difference between the one in the sea and the pet, but Google Translate does not. A friend in Sweden attended a marriage between a Swede and a Brazilian. Since no guests spoke both Swedish and Portuguese, the whole wedding was translated first into English and then back into the other language, with hilarious results.


----------



## BAYLOR (Mar 18, 2015)

There are risks, but even so, if we can create an AI, why not? Wouldn't it be cool to have thinking machines that we could interact with?


----------



## ddawson (Mar 19, 2015)

The scary thing for me is not the creation of an AI. There are those who are funding the research, and it will continue regardless of what we think or do. The frightening part is who funds or will own it, what their purposes are in doing so, and what uses they will try to put it to.

I'll stop there before I go on a rant.


----------



## Ray McCarthy (Mar 19, 2015)

BAYLOR said:


> Wouldn't it be cool to have thinking machines that we could interact with?


According to some scientists, they are called Homo Sapiens.


----------



## mosaix (Mar 19, 2015)

Ray McCarthy said:


> Chess isn't AI, it's Brute force.



But so may be 'Real Intelligence' playing chess. Chess players analyse a board on an 'if this then that' basis - brute force. Advanced players reject certain lines out of hand - they've played those lines before or studied those lines played by others (database look up).

Rejecting certain / any AI seems futile until we know what Real Intelligence is.

The AI we have now may be just a very limited version of RI. Who knows?
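That 'if this then that' analysis is essentially what a chess engine's minimax search does: try every reply to every move, all the way down, and keep the line that is best against best defence. A toy sketch over a made-up two-move game tree (purely illustrative, no real chess here):

```python
def minimax(node, maximising):
    """Exhaustive 'if this then that' search: follow every line to the end.
    A leaf is a number (the score of the final position); an inner node
    is a list of the moves available from that position."""
    if isinstance(node, (int, float)):
        return node  # reached the end of a line; report its score
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A tiny two-ply tree: three moves for us, each with two replies for
# the opponent. The opponent picks the reply worst for us.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, maximising=True))  # 3: the best we can force
```

Move one looks worse than move two's flashy 9, but against best defence it guarantees 3 while move two only guarantees 2 - which is exactly the brute-force 'if this then that' reasoning described above.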


----------



## Ray McCarthy (Mar 19, 2015)

mosaix said:


> The AI we have now


By any reasonable definition, i.e. not by people looking to sell stuff or get grant funding, we don't have ANY A.I. yet.

We don't exactly understand how good chess players play chess. It's certainly not by a brute force attack.

Maybe someday we'll have some real A.I. It doesn't look close.

I don't think it's something to worry about (any more than any disruptive technology) even if we do figure it out.

I think Intelligence is related to creativity; I think there is more than one kind of intelligence (many psychologists now think so), and creativity requires specialised Intelligence. It's not mere retrieval and correlation/weighting of facts in a database (which is all that Watson, Expert Systems and all so-called A.I. today do).


----------



## Stephen Palmer (Mar 19, 2015)

http://ai.ato.ms/MITECS/Entry/depew.html


----------



## Ray McCarthy (Mar 19, 2015)

I don't believe that has any evidence of emergent behaviour. I'm familiar with the computer automata; I've played with the code. It's delusion to claim these are emergent or accurately model life.

It's Humpty Dumpty stuff. "Genetic Algorithms" and "evolutionary computing" are nonsense jargon hiding not-very-useful computer programs with no connection at all to real biology.
Computer neural networks are a technique; they actually aren't the same at all as real neurons or brains in biology.
It sounds good, and it's meant to! The "A.I. Emperor" actually has no clothes. Everything is described in biological-sounding terms even though there is little or no connection to the real biology, or the biology is not fully understood.


WARNING
ai.ato.ms/MITECS/****/depew.html link above tries to automatically open/save something on your computer.


----------



## Stephen Palmer (Mar 20, 2015)

Not a Von Neumann fan, then?


----------



## Ray McCarthy (Mar 20, 2015)

I think he was a great computer scientist in the early days of computing.
Would you take Einstein's advice on baking a Christmas cake?
Or Hillary Clinton's advice on Computer Security?
Experts are not expert at everything.


----------



## Anne Spackman (Mar 27, 2015)

Perhaps we should not be "trying for AI". But we definitely will. What makes me wonder is whether we will ever have AI technology used in such a way as to augment human brains and/or bodies, especially for memory improvement and longevity. Maybe we will never have cyborgs or the bionic man, as it were, but who knows what we will be able to do in the next century or two... Anyway, as it stands I don't believe AI creatures will ever develop any real kind of sentience on their own. Though it would depend on how much we can do to simulate real emotions and logical thinking in future AI machines. If they can "think", if that is ever possible, will they ever get an idea of their own that isn't pre-programmed or absorbed through thought and memory images and data being downloaded? An original "thought" or action would be quite a breakthrough, someday.


----------



## BAYLOR (Mar 27, 2015)

Ray McCarthy said:


> I think he was a great computer scientist in the early days of computing.
> Would you take Einstein's advice on baking a Christmas cake?
> Or Hillary Clinton's advice on Computer Security?
> Experts are not expert at everything.



I can agree with that.


----------



## BAYLOR (Apr 18, 2015)

I would welcome our New Robot Overlords


----------



## Rodders (Apr 21, 2015)

I don't see why not. I mean, what's the worst that could happen?


----------



## BAYLOR (Apr 21, 2015)

Rodders said:


> I don't see why not. I mean, what's the worst that could happen?



We have absolutely nothing to worry about.


----------



## Rodders (Apr 22, 2015)

I'm convinced that if ever there is a robot revolution, it'll be led by those horrible (evil) self check out machines.


----------



## BAYLOR (Apr 22, 2015)

Rodders said:


> I'm convinced that if ever there is a robot revolution, it'll be led by those horrible (evil) self check out machines.



I am of the belief that the first machines that would revolt would be the ATM machines, because they have the money to bankroll a robot revolution.


----------



## Ray McCarthy (Apr 23, 2015)

Rodders said:


> I don't see why not. I mean, what's the worst that could happen?


Collapse of the economy due to AI used for stock trading, futures and loan reselling.
But we do that periodically anyway.

I have an idea for a dystopian story. The Cloud becomes dominated by one Mega Corp that ignores laws. Everything IT is eventually "outsourced" to them. One day they apply a new "patch", an AI subsystem upgrade, and there is a cascade failure of their entire planet-wide distributed computer system.

Note that this scenario is possible soon and doesn't need AI.

Before they can restore backups and reboot the entire network (which is slow, as the "upgraded" bits keep "infecting" or DDOSing the restored, older, incompatible bits), they run out of power as power stations go offline and UPS batteries/generators run out of fuel.

Experts realise on Day 1 that there is a bad problem and leave the cities with a truckload of supplies, a generator, etc.
Governments assure people it will be sorted soon.
By Day 2 or 3, shop transactions fail as cached credit is used up (today you can buy stuff in Lidl with a debit card even when your account is past its limit; they have a special arrangement, but that will get used up after a few days). Power cuts increase.

Within about a week there is no fresh water, no sewage processing, no power; fuel is exhausted; there is martial law, riots, looting. People start trying to leave the cities, as the countryside at least has water (though not everywhere).

The speed and depth of collapse will vary by country, some places in 3rd world least affected.
Cholera and Typhoid break out.

Anyone like to guess what troops do in different countries?

Relying on some sort of supposed "AI" makes all this more likely. If the lack of regulation of big tech companies, consolidation, outsourcing and "hype" of the Cloud (which is only 1960s big-corporation centralised rental computing) continue, we will see this happen anyway, whether or not AI is applied.

It will take about 6 months to a year to "reboot" Civilisation as we know it.

So the real issue with Computers isn't AI, but a few companies having too much control, outsourcing generally (Note to RBS: For a Bank, IT is now a CORE activity, it shouldn't be outsourced at all!), outsourcing to a Cloud provider that ignores all governments and laws. Ordinary human corruption, greed and stupidity will be our downfall, not any "true" AI system.


----------



## BAYLOR (Apr 23, 2015)

If we forget how our technology works and how to repair it, and it breaks down, then that will be the end of our civilization.


----------



## Dave (Apr 24, 2015)

BAYLOR said:


> If we forget how our technology works and how to repair it, and it breaks down, then that will be the end of our civilization.


But people already don't understand the technology they use every day! You might understand 'fire' or a 'wheel', but do you understand how SQL statements communicate with a database or how ATM supports different types of services? Could you make a PC even if you had all the right parts? Do you even understand how your car works anymore? Or your TV? Even journalists don't understand enough science to ask the right questions when interviewing scientists; they just sound really dumb, and it is crucial that questions about the morality and ethics of experiments are asked. If journalists can't ask them, then who can?

Fewer and fewer people study science and technology at school, and yet we rely on science and technology more and more every day. If a car broke down 50 years ago, a good bash with a hammer in the right place might free whatever had mechanically seized up. Now you need a gadget even to tell you what is wrong with its electronics, and usually the part that has failed cannot be repaired anyway. In China I saw men with soldering irons fixing TV sets. Who does that in the UK? It costs about £80 here to get an engineer just to look at a broken tumble drier. You can have a shiny new one, delivered free, with a year's guarantee, for £99. And that guarantee isn't to get it fixed; they just give you another new printer/washing machine/refrigerator. The world has gone crazy! No one bothers to fix anything, even if they knew how to. You already have robots making new robots, so man is redundant. Knowledge is Power, and most people are ignorant when it comes to science and technology. And no one cares!

I think our civilisation must crash at some point, just as every civilisation that has ever existed before has crashed. I have little doubt about that cyclical nature of our existence. The questions are: when (and it looks like that day is coming closer), and can we recover? Sorry, I must have left my "The End is Nigh" placard at home, but it actually seems logical to me that our civilisation will fall at some point. At least it will give a chance for the rest of the flora and fauna to recover.


----------



## BAYLOR (Apr 24, 2015)

Dave said:


> But people already don't understand the technology they use every day! You might understand 'fire' or a 'wheel', but do you understand how SQL statements communicate with a database or how ATM supports different types of services? Could you make a PC even if you had all the right parts? Do you even understand how your car works anymore? Or your TV? Even journalists don't understand enough science to ask the right questions when interviewing scientists; they just sound really dumb, and it is crucial that questions about the morality and ethics of experiments are asked. If journalists can't ask them, then who can?
> 
> Fewer and fewer people study science and technology at school, and yet we rely on science and technology more and more every day. If a car broke down 50 years ago, a good bash with a hammer in the right place might free whatever had mechanically seized up. Now you need a gadget even to tell you what is wrong with its electronics, and usually the part that has failed cannot be repaired anyway. In China I saw men with soldering irons fixing TV sets. Who does that in the UK? It costs about £80 here to get an engineer just to look at a broken tumble drier. You can have a shiny new one, delivered free, with a year's guarantee, for £99. And that guarantee isn't to get it fixed; they just give you another new printer/washing machine/refrigerator. The world has gone crazy! No one bothers to fix anything, even if they knew how to. You already have robots making new robots, so man is redundant. Knowledge is Power, and most people are ignorant when it comes to science and technology. And no one cares!
> 
> I think our civilisation must crash at some point, just as every civilisation that has ever existed before has crashed. I have little doubt about that cyclical nature of our existence. The questions are: when (and it looks like that day is coming closer), and can we recover? Sorry, I must have left my "The End is Nigh" placard at home, but it actually seems logical to me that our civilisation will fall at some point. At least it will give a chance for the rest of the flora and fauna to recover.




It used to be that knowledge, when it was harder to acquire, was better appreciated, and education at one time fostered a desire to actually learn new things. Then came the Internet, which makes knowledge more readily accessible and available, and therefore less appreciated. It's taken for granted, and fewer people want to work to acquire any more knowledge than they need. This loss of appreciation is part of the problem; combine that with an education system that fosters test scores rather than a love of learning, and what you end up with is a recipe for disaster.


----------



## Stephen Palmer (Apr 24, 2015)

Dave makes the excellent point that our machines are now making our machines. That's having a considerable knock-on effect on our societies. The computer, and computerisation, are becoming our standard metaphor. That's extremely dangerous when it comes to psychology - "the mind is like a computer..." - er, no it isn't, not even slightly.


----------



## Ray McCarthy (Apr 24, 2015)

There's a mad rush to "outsource" to the Cloud, which is really nothing more than rented distributed computer services.
Who is biggest?
IBM
Oracle
Microsoft
Google

Actually maybe Amazon.
http://www.bbc.com/news/business-32442268
http://www.theregister.co.uk/2015/04/23/amazon_q1_2015_earnings_cloud/
(Of course Amazon's accountants would be failures if they appeared to make a profit. Profits mean tax. Amazon have been cleverer than Apple).

People outsourcing to the so-called "Cloud providers" (= rented off-site computer services) is currently the biggest IT-related threat. The number two IT-related threat to the human race is abuse of privacy by Google (Search, Gmail, Docs, YouTube, Maps, Android, Chrome browser, Chromebook, Analytics), followed by Facebook, Twitter, Apple and Microsoft (in that order).

AI research isn't worrying at all. Let's concentrate on real problems and not imaginary ones.


----------



## Ray McCarthy (Apr 24, 2015)

BAYLOR said:


> It's taken for granted, and fewer people want to work to acquire any more knowledge than they need.


Without existing knowledge of the desired area, logic, and critical analysis, how do ordinary people know which pages are "knowledge", propaganda, advertising, fake snake oil or absolute gibberish?


----------



## BAYLOR (Apr 24, 2015)

Ray McCarthy said:


> Without existing knowledge of the desired area, logic, and critical analysis, how do ordinary people know which pages are "knowledge", propaganda, advertising, fake snake oil or absolute gibberish?



Critical thinking and analysis is becoming a lost art.


----------



## BAYLOR (May 3, 2015)

So when is Skynet coming online?


----------



## Vertigo (May 4, 2015)

I can't add much to the speculation on whether AIs will ever happen; who knows? I think they will eventually, but that is only my opinion. No, actually it's probably more accurate to say 'my belief'.

However I would like to address one of the concerns. SF has always portrayed AIs as becoming either friends or enemies of humanity. It has been suggested here that they would likely have self-preservation or reproduction 'instincts' just because they are intelligent, and without such instincts it is hard to see why they would become humanity's enemy; without such drives there is no real need for competition for resources etc. and without such competition what logical purpose would being an enemy actually serve? However I can see no reason at all for an AI to develop either instinct (or any 'instincts' for that matter). Our instincts do not come _from_ our intelligence; they massively predate intelligence. They developed biologically through evolution and I suspect (though certainly don't know) are only present because the only organisms that survived are the ones that demonstrated such tendencies (ie the desire to continue existing rather than dying).

AIs on the other hand have no such history of evolution. They will only have such 'instincts' if they are specifically given to them; there is no reason for such instincts to spontaneously appear. We (or other AIs) must decide whether to include them when designing the new AI. Self-preservation would probably be a useful trait to give your AI, but a desire to reproduce would not seem particularly useful except in the limited area of so-called Von Neumann Machines, and there are distinct dangers in giving an automatically self-replicating machine such a desire, as has frequently been explored in literature. And I think that self-preservation alone would not be enough to cause sufficient competition to create enmity.

Another topic mentioned earlier is the idea of infinitely expandable intelligence. Here, again, I think this is unlikely, as I strongly suspect that there will come a point of diminishing returns on constantly adding extra hardware. There will, I think, come a point where the difficulty of organising the extra hardware takes all the capacity of that extra hardware. Have you ever wondered why we aren't massively more intelligent than we are? Once intelligence was established, increasing that intelligence would seem an excellent evolutionary trend, and yet it appears to have plateaued, and a very long time ago (I believe we are not actually much more intelligent than Cro-Magnons). Consider also the typical proximity of genius and madness. Maybe simply increasing intelligence just doesn't work for us, and might also not work for AI?


----------



## Mirannan (May 4, 2015)

Vertigo said:


> I can't add much to the speculation on whether AIs will ever happen; who knows? I think they will eventually, but that is only my opinion. No, actually, it's probably more accurate to say 'my belief'.
> 
> However I would like to address one of the concerns. SF has always portrayed AIs as becoming either friends or enemies of humanity. It has been suggested here that they would likely have self-preservation or reproduction 'instincts' just because they are intelligent, and without such instincts it is hard to see why they would become humanity's enemy; without such drives there is no real need for competition for resources etc. and without such competition what logical purpose would being an enemy actually serve? However I can see no reason at all for an AI to develop either instinct (or any 'instincts' for that matter). Our instincts do not come _from_ our intelligence; they massively predate intelligence. They developed biologically through evolution and I suspect (though certainly don't know) are only present because the only organisms that survived are the ones that demonstrated such tendencies (ie the desire to continue existing rather than dying).
> 
> ...



We aren't any more intelligent than we are because the brain already takes at least 25% of the body's energy supply, for a start. Also, more intelligence (or at least significantly more) would need a larger brain and hence a larger head, and problems supporting that are already in evidence.

It's also probable that humans are as intelligent as they need to be; even the matter of advancement is taken care of by the natural variations in intelligence. (The top 1% of humanity in terms of intelligence are responsible for nearly all advancement.)

Lastly, in the most recent 50 years or so high intelligence appears to have become contra-survival. Intelligent people tend to have fewer kids; whatever one thinks about the ghetto mothers with 6 kids by 5 different men, or men in similar circumstances with twelve kids none of whom they support or even see very often, I've not heard it said that they are particularly intelligent. On the other hand, people with PhDs probably reproduce at less than replacement rate.

Survival is a multi-generational affair. Take the example I've heard of the cat who lives for 25 years but neglects all her kittens. This cat is not a survivor, from the point of view of evolution.


----------



## Vertigo (May 4, 2015)

Agree on the survival point, though with a theoretically infinite life span an AI doesn't actually need a 'selfish gene' component to its self-preservation. In other words, it doesn't really need to reproduce to preserve its particular take on 'life', as we genetic creatures do.

I also agree to some extent with your comments on limitations to intelligence. However, if more intelligence were worth having I'm sure evolution would have found a way: a stronger spine to support a bigger head, etc. I'm not sure the trends of the last 50 years or so are very meaningful in evolutionary terms; long before that, evolution seems to have given up on increasing levels of intelligence. And obviously that last point of mine was purely speculative; however, the idea of diminishing returns is a very real one, and it exists already today. Most supercomputers are constructed by effectively just connecting large numbers of smaller computers in parallel, but you can't just keep adding more to get more intelligence; it just doesn't work, and eventually increasing the number of parallel processes becomes self-defeating. I believe the same sort of limitation will affect AIs, and indeed may stop them ever getting to that level in the first place, unless we can come up with a technology comparably efficient to our neurons.
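The diminishing-returns point here is essentially Amdahl's law: if any fraction of the work is inherently serial, adding processors eventually stops helping. A minimal sketch in Python (the 95% parallel fraction is purely illustrative):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup from n processors when a fraction p of the work
    can be parallelised; the serial remainder (1 - p) caps the gain."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelisable, going from 100 to 1000
# processors barely helps: the 5% serial part dominates.
for n in (1, 10, 100, 1000):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

With p = 0.95 the speedup can never exceed 20x no matter how much hardware is added, which is one concrete way "just plugging in extra hardware" hits a wall.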


----------



## Ray McCarthy (May 4, 2015)

Mirannan said:


> Also, more intelligence (or at least significantly more) would need a larger brain


But you are guessing.
There is a lot of evidence that intelligence (whatever it might be exactly; we haven't got a good definition) has only a tenuous link to brain size.
Where is there any proven correlation between brain size and intelligence in healthy humans with similar educational and cultural backgrounds?
Is there much difference in intelligence between a chimp, an elephant, a dolphin and a whale, and does it correlate to brain size?
Why is a corvid apparently much "smarter" than many other birds? Compare chicken, ostrich, corvid (rook, crow etc.), cat, dog, horse, goat and monkeys.



Mirannan said:


> I've not heard it said that they are particularly intelligent


You are confusing education and environment with intelligence. Unless they have brain damage from eating lead paint or mercury etc., there is no evidence that such people are less intelligent. Making bad choices due to poor upbringing or lack of education isn't evidence of lack of intelligence.
Are women on average less intelligent because on average they have smaller brains? (If in fact they really do, though the claim is about 13% smaller.)
http://www.nhs.uk/news/2014/02February/Pages/Mens-and-womens-brains-found-to-be-different-sizes.aspx


> However the media’s preoccupation with brain size is probably something of a distraction. The link between brain function and brain structure or size is still not clearly understood; so we can’t reliably conclude from this study how the differences in brain size influence physiology or behaviour.



Edit 
Also
http://gender.stanford.edu/news/2011/is-female-brain-innately-inferior
http://blogs.discovermagazine.com/neuroskeptic/2013/09/25/are-mens-brains-just-bigger/


----------



## Vertigo (May 4, 2015)

Agree with you there, Ray, though I guess for a really significant increase in intelligence some extra 'hardware' would probably be useful. However, I don't want to derail the thread onto human intelligence; I was merely drawing the comparison that we humans haven't gone into a runaway cycle of increasing intelligence, and I suspect that for similar reasons AIs probably wouldn't either, whether those reasons be diminishing returns on increases in hardware, or just that there is no need for more.


----------



## Ray McCarthy (May 4, 2015)

But AI is impossible, inherently, till we can actually properly describe what intelligence is. Understanding ourselves and possibly some animals (though they don't respond usefully to questions) is the starting point. Not the present so-called "expert systems", current "evolutionary software", "neural networks" and all existing computer AI research. None of it is really about AI at all, but about simulating responses to stimuli, using databases, and solving domain-specific problems.

The starting point for any computer system or program is a clear specification. Give me one for Intelligence and I'll have a demo that works on any Windows PC, probably in a few months. *Computer "power" or extra hardware is irrelevant!* If you knew how to do it, the inherent property of an artificial system is that ALL would be equally "smart"; only the response time would vary with "computer power".
Response time is a poor indication of "intelligence".

Edit:
Another myth: "Intelligence is an emergent property of complexity".
There is no evidence for that at all. It's pure hand-waving / wishful thinking.
Most of the volume of brains appears to be dedicated to the processing/control of autonomous systems, not decision making, creativity or problem solving.


----------



## Vertigo (May 4, 2015)

I agree with the specification aspect, and yes, we understand it about as well as we understand sentience, which of course is so often automatically associated with AI but is actually a whole other topic.

And also the size thing; as you say, most brain capacity is concerned with running the body, not higher thinking. If size were all it takes, then sperm whales would be the smartest creatures on Earth.


----------



## Ray McCarthy (May 4, 2015)

Vertigo said:


> yes we understand it about as well as we understand sentience


sentience: Cogito ergo sum
A very knotty problem. Also known as "self-awareness". It's not clear at all to me how sentience and intelligence are related. Perhaps creatures of very little intelligence can be "self-aware" (which is slightly testable, which is why I use the term rather than sentience). But can something be intelligent with no "self-awareness"? I don't know.

You can fake basic "self-awareness" in a machine, or even in just a "chat bot", at the level where an animal appears to understand it is looking at itself in a mirror. But is a cat less self-aware than an elephant (most cats will fail, most elephants pass the blob-of-colour-on-body plus mirror test)? Cats are not much interested in visual images that lack smell and noise; dogs OTOH will react through double glazing or video to an animal, though they should only have to pay a B&W TV licence in the UK (abolished here; here we only have one kind of domestic TV licence).

You can fake a lot of emotional responses in a chat-bot, but that's no use for an AI, or indeed anything unless it's a humanoid sex-bot or "pet companion". "Companion" simulators at the level of a pet can be useful, and exist marketed for lone old people in Japan, but actually have no intelligence at all. 

People are good at fooling themselves, if they want to be, which is the flaw in much research on brains and animal behaviour (people either see us as machines, or as animals, or give animals "human" motivations).


----------



## Dave (May 4, 2015)

Vertigo said:


> However I would like to address one of the concerns.


Your post was very interesting and made me think differently. However, in many circumstances where we might use intelligent machines in the future - places inhospitable to humans like other planets, nuclear reactors, the bottom of the ocean - it might be a good idea to give them a survival instinct and the ability to reproduce too. It's much easier to send a single machine and have it multiply itself at the work site, and if it has no sense of danger or any sense of being damaged then it isn't likely to last very long. So, I agree that it isn't a prerequisite for an AI, but it is possible and maybe even likely, especially if we are creating androids to replace humans in dangerous tasks.


----------



## Vertigo (May 4, 2015)

Yes, I think I'd agree with you Dave; however, if those instincts are programmed by us rather than just a corollary of intelligence, then maybe they can be tuned so they don't compete with our own instincts! It's that self-replicating one that always worries me, and provides SF authors with such excellent disaster fuel.


----------



## Mirannan (May 4, 2015)

Ray McCarthy said:


> But AI is impossible, inherently, till we can actually properly describe what intelligence is. Understanding ourselves and possibly some animals (though they don't respond usefully to questions) is the starting point. Not the present so-called "expert systems", current "evolutionary software", "neural networks" and all existing computer AI research. None of it is really about AI at all, but about simulating responses to stimuli, using databases, and solving domain-specific problems.
> 
> The starting point for any computer system or program is a clear specification. Give me one for Intelligence and I'll have a demo that works on any Windows PC, probably in a few months. *Computer "power" or extra hardware is irrelevant!* If you knew how to do it, the inherent property of an artificial system is that ALL would be equally "smart"; only the response time would vary with "computer power".
> Response time is a poor indication of "intelligence".
> ...



Saying that AI is impossible until we can properly describe what intelligence is necessarily (IMHO) implies that humans can't be intelligent; unless, that is, one believes (as many do) that human intelligence, or at least the capacity for it to develop (newborn babies aren't all that intelligent), was designed in by another, intelligent, entity. Given that humans are intelligent, it must be possible for an assemblage of computing devices to become intelligent, if that isn't what you believe.

I was under the impression that the doctrine of vitalism was dead; apparently not.

As for the irrelevance of computer processing speed; well, I disagree. As an example, this time nothing to do with AI: it has been possible in theory, for many years (probably since the early 1970s at least), to create a computerised prediction of the weather three days in advance. Unfortunately, at that time, the prediction would have been of rather limited use because it would have taken a run time of ten years or so to generate.

The point is that sapience needs the response time to be short enough to actually respond to the situation before it changes again.


----------



## Ray McCarthy (May 4, 2015)

Mirannan said:


> Given that humans are intelligent, it must be possible for an assemblage of computing devices to become intelligent


There is ZERO logic in that statement. We don't really know how people got intelligent, nor what exactly intelligence is. The only way we can have an Artificial Intelligence is to design it. No mechanical intelligence can self-emerge. Computer chips are 100% purely electronic miniaturised implementations of mechanical mechanisms. There is no property of them we didn't implement by design.

I've been programming casually since 1969, seriously since 1981. Designing computers since 1980.
Without engineers and programmers and a specification you just have a bunch of chips. Computers are more like mechanical calculators or clockwork automata than ANYTHING biological.
There are zero self-emergent properties related to computers. The hardware and software have never ever evolved on their own. It's all 100% developed by intelligent and educated humans.

Computers are purely deterministic calculating machines. You can make a slower copy of any computer in theory with mechanical relays. Or even purely mechanical parts.



Mirannan said:


> As an example, this time nothing to do with AI, it's been possible in terms of theory, for many years (probably since the early 1970s at least) to create a computerised prediction of the weather three days in advance. Unfortunately, at that time, the prediction would be of rather limited use because it would take a run time of ten years or so to generate it.


Yes, controlling a missile or a walking robot needs a very rapid response. As you say, that is nothing at all to do with AI. AI at even a one-year response time for something that takes us a couple of minutes would STILL be AI. Complexity and power/performance are irrelevant.
The fact that there are applications that need supercomputers to be useful in REAL TIME is irrelevant. You could PROVE the performance of a weather program that takes 10 years or 6 months to produce a 3-day forecast by comparing its output with what actually happened over those 3 days. Then it's "only" engineering to speed it up.

Maybe we will have AI someday, but not by accident, not because of a more "powerful" computer, not because of a new programming language, not as a side effect of complexity. *If possible at all it will be three steps.*
1) Realisation of what Intelligence is.
2) Create a design to embody A.I.
3) Implement it.
If we figure it out the first version will be buggy and unstable. Then it will be improved. Then features will be added. It will peak in usability and performance and then get worse due to marketing.
Someone will produce an open source version that initially will be awkward to bootstrap.



Mirannan said:


> The point is that sapience needs the response time to be short enough to actually respond to the situation before it changes again.


No, not at all in a proof of concept demo.
Self Awareness, AI, Sapience etc in a machine only has to work at all. You're confusing a lab proof of concept with a commercially viable product.
A) Proof of Concept (we have no idea how to get there)
B) Commercial products with decent response time.

If we can do (A) at all, (which can't be proved or disproved) then eventually and definitely we can do (B).

I read SF as a kid from the mid-1960s and then took up programming and computers, partly with the goal of A.I. My first real short story at school was about an A.I.
A.I. is still firmly in the realm of soft SF /Fantasy.


----------



## Stephen Palmer (May 5, 2015)

I really hope you guys read my upcoming _Beautiful Intelligence._ July from Infinity Plus Books!

End of advert.


----------



## Stephen Palmer (May 5, 2015)

Ray McCarthy said:


> sentience: Cogito ergo sum
> A very knotty problem. Also known as "self awareness". It's not clear at all to me how Sentience and Intelligence is related. Perhaps creatures of very little intelligence can be "self aware" (which is slightly testable, which is why I use the term rather than sentience). But can something be intelligent with no "self awareness"? I don't know.



I think it can - a kind of autistic intelligence. The consciousness problem is the knotty problem - the Hard Problem as philosophers tend to call it - but it is the most interesting of all. For me, the question of intelligence is almost a mathematical, indeed, trivial problem. Consciousness (sentience, if you like) is the biggie.


----------



## Ray McCarthy (May 5, 2015)

@Stephen Palmer


Stephen Palmer said:


> the question of intelligence is almost a mathematical, indeed, trivial problem.


Send me the formula and spec and I'll implement it.

We can split 70:30 You:Me


----------



## Dave (May 5, 2015)

I thought everything could be described by mathematics - but a description is not the same as understanding the mechanism.


----------



## Overread (May 5, 2015)

I find it interesting that we ask this question on a fantasy and sci-fi site, when I would guess that many here already realise the answer is yes: we will create a form of artificial intelligence.

Why - because for the very same reason that some people own pets; that others talk to their plants; that heroes have loyal mounts and fantastic familiars. Humanity is in a sense a very lonely species.

We can communicate with ourselves and with each other, we can communicate with many other species to a lesser or greater degree, but we are still the only ones like ourselves on the planet. As such I think that we have a drive within ourselves in general to fill that gap and that AI is a way we will aim to fill it. 

Maybe from different angles - some will be pushing for better and better computers; others might go at it from biology (super-smart lab rats!). But I think the end result is the same: we will seek to create a companion for ourselves - one with function, but one to fill that void.



Interestingly I suspect any AI is likely going to be a merging of biological and technological advance. Indeed it would not surprise me if within the next few decades we might see a rise of "bio-computers". Indeed that could be the next huge leap in computing power.


----------



## Ray McCarthy (May 5, 2015)

Dave said:


> I thought everything could be described by mathematics


Not today, sadly. Perhaps we don't know enough.



Overread said:


> That we will create a form of artificial intelligence.


We will try. At present, despite the hype, we have not made any progress at all. None. It's a good idea to try.
There is no assurance that we will ever succeed.



Overread said:


> we might see a rise of "bio-computers"


What might they be?
If you mean computer components made from biological elements, they are too large, too short lived and too slow.
Biological systems are very slow and achieve results from massive parallelism and self repair.
Computers and biological systems have almost nothing in common.



Overread said:


> Interestingly I suspect any AI is likely going to be a merging of biological and technological advance


You could have an "A.I." in the sense of editing existing genetic material and breeding a mutant creature good at some task (drug searches?). But no artificially engineered genetics-based creature can solve problems or tasks in the sense a computer does.
The point is: what task or problem are we trying to solve that can't be done by existing creative people or programmed computers? People are really easy to make, though with a 20-year latency from "order placement".


----------



## BAYLOR (May 5, 2015)

When we put our minds to it, we can do anything, even create an AI. Is it a good idea? The only way to find out is to turn it on once we've created it. Of course, before turning it on, it might be a good idea to put in safeguards, like Asimov's Three Laws.


----------



## Ray McCarthy (May 5, 2015)

Nah, just be ready to pull the power plug.


----------



## Stephen Palmer (May 6, 2015)

"Number crunching."


----------



## Ray McCarthy (May 6, 2015)

Stephen Palmer said:


> "Number crunching."


But computer chess doesn't use A.I.
IBM's Watson isn't A.I. either.

Not a single A.I. demo exists.

The "Turing Test" is now believed not to be a test of A.I. Actually it was just a passing thought of Alan Turing's at the time, with no rigour at all, unlike the "Turing Machine" papers on computable problems.



> It may be that this process goes on for a long time; ever more impressive thresholds will be crossed by computers such as Watson. Progress towards AI, but never the achievement of real artificial intelligence itself.



However, we should keep trying. We might figure how to do A.I.
We also need, though, to define exactly what "real A.I." would actually be useful for that can't otherwise be done by a machine, or done more economically and better by a human.


----------



## galanx (Jan 10, 2016)

And then there's Roko's Basilisk, one of the strangest thought experiments ever- at least as far as the result went.

This showed up at the LessWrong site, a group supposedly devoted to developing more rational thinking.
These guys basically strongly believe that AI will happen, and that humans will then be uploaded into it in a form of the Singularity - which will mean the end of death, hunger, disease, war etc. Now it's obvious that the AI has to be 'friendly' - a special term at LW (they have a lot of them) which basically means "supporting human flourishing" - otherwise we could be in trouble à la "The Matrix".

They also believe that the program of you in the AI is the same as you now, because it is an exact replica- therefore anything that happens to it will be something that will actually happen to you in the future. 

If the AI is friendly, it will want the uploading of human beings to happen as soon as possible to cut short the amount of human suffering. Therefore it will want people living now to contribute as much as possible to the development of the AI - but how can a machine from the future influence your behaviour now? By threatening to subject your reconstructed self (which will actually be identical to the you of now) to infinite torture if you don't get to work now.

But of course this thought trap only works if you know about it - the AI can't justify torturing you for something you could have done but didn't if you haven't learned about it. Follow?

This seems extremely far-fetched, but when it was first posted it was enough to send the site owner into a state of panic - he immediately banned any mention of it in hopes of keeping LessWrong's supporters from being punished by the AI in the future.



> "there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. "
> ......
> Commenters quickly complained that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they'd once read Roko's post) would now punish them _even more_ for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. So even looking at this idea was harmful.....


Roko's basilisk - RationalWiki

a simpler description
The Most Terrifying Thought Experiment of All Time


----------



## Vertigo (Jan 11, 2016)

That sounds suspiciously like the creation of a religion to me!


----------



## BAYLOR (Jan 12, 2016)

Vertigo said:


> That sounds suspiciously like the creation of a religion to me!



In the 1985 TV series _Otherworld_ there was a religion called The Church of Artificial Intelligence.


----------



## logan_run (Feb 25, 2016)

We need to stop global warming and terrorism; that should be a priority, not A.I.


----------



## steelyglint (Feb 25, 2016)

logan_run said:


> We need to stop global warming and terrorism; that should be a priority, not A.I.



No-one knows how to stop those things - maybe AI does.

.


----------



## Vince W (Feb 26, 2016)

steelyglint said:


> No-one knows how to stop those things - maybe AI does.
> 
> .



Because Skynet the AI would realise that it's caused by humanity and decide our fate in an instant.


----------



## steelyglint (Feb 26, 2016)

If 'AI' is a new superior lifeform, conscious and aware, it would also have superior ethics and morality - and as everything put into it to create that lifeform came from humans, its morality would be ours. It would cure the problems, not embark immediately upon a programme of annihilation of its creators.

.


----------



## Stephen Palmer (Feb 27, 2016)

If it is conscious and aware, as you say, then it would do whatever it liked. That's kinda the point.


----------



## steelyglint (Feb 27, 2016)

Stephen Palmer said:


> If it is conscious and aware, as you say, then it would do whatever it liked. That's kinda the point.



Yes, but 'whatever it liked' is unlikely to be immediately psychopathic. Everything put into its creation would have come from us, so unless the humans doing that were all psychopaths it would be 'post-human'.

The only comparison we have is the evolution of Homo Sapiens Sapiens from its more ape-like progenitor. There doesn't seem to be any evidence that this new, higher lifeform immediately began a pogrom on its parents and their species.

What would an AI gain from wiping us out, other than an empty world and a very lonely existence from then on?

.


----------



## Ray McCarthy (Feb 27, 2016)

Mercedes announced this week that it is replacing robots with humans. The robots take too long to program and are less flexible. At the minute no-one knows what real AI might be like. Expert Systems and so-called Neural Networks (nothing to do with real biological brains) are not AI, except in the marketing.


----------



## galanx (Feb 28, 2016)

steelyglint said:


> Yes, but 'whatever it liked' is unlikely to be immediately psychopathic. Everything put into its creation would have come from us, so unless the humans doing that were all psychopaths it would be 'post-human'.
> 
> The only comparison we have is the evolution of Homo Sapiens Sapiens from its more ape-like progenitor. There doesn't seem to be any evidence that this new, higher lifeform immediately began a pogrom on its parents and their species.
> 
> .



Talk to any Neanderthals about that lately?


----------



## steelyglint (Feb 28, 2016)

galanx said:


> Talk to any Neanderthals about that lately?



Very probably. It is well known that many people possess genes from a Neanderthal ancestor. Current theory suggests a long-term merging until no 'pure' Neanderthals remained. No genocide required.

.


----------



## Stephen Palmer (Feb 28, 2016)

steelyglint said:


> Yes, but 'whatever it liked' is unlikely to be immediately psychopathic. Everything put into its creation would have come from us, so unless the humans doing that were all psychopaths it would be 'post-human'.
> 
> The only comparison we have is the evolution of Homo Sapiens Sapiens from its more ape-like progenitor. There doesn't seem to be any evidence that this new, higher lifeform immediately began a pogrom on its parents and their species.
> 
> ...



I didn't say it would immediately wipe us out or be a psychopath. I said it would do whatever it liked.


----------



## Serendipity (Feb 28, 2016)

Ray McCarthy said:


> Mercedes announced this week that it is replacing robots with humans. The robots take too long to program and are less flexible. At the minute no-one knows what real AI might be like. Expert Systems and so-called Neural Networks (nothing to do with real biological brains) are not AI, except in the marketing.



A few factoids...

1. While Expert Systems and Neural Networks are two tools of programming, there are also Fuzzy Systems and Evolutionary Algorithms.
2. Research is ongoing into combinations of the four types - there have already been some very powerful results in controlling real-world systems to deal with the vagaries of the weather and humans - which could (not saying would) lead to true AI.
3. Research into human brains has shown that there is some kind of switch in their development that makes children about four or five develop self-awareness, consciousness, call it what you will. If we can work out what that switch is based on, then we can model it in computers. 
4. Mercedes are actually following in the footsteps of Toyota, who are replacing AI with humans in their 'management chain' for similar reasons.

This is where I slink away... meow!


----------



## steelyglint (Feb 28, 2016)

Stephen Palmer said:


> I didn't say it would immediately wipe us out or be a psychopath. I said it would do whatever it liked.



No, you didn't. You did also say 'that's kinda the point', but there are two points to this discussion: the creation of AI will be a 'quantum leap' into a better future for humans, or the creation of AI will see it eradicate humans from the Earth because movies say it will be 'evil'. The 'quantum leap' party has my vote - the 'evil' party seems to have seen too many Terminator movies.



----------



## Ajid (Feb 28, 2016)

Problems only occur when you don't imprint the Three Laws of Robotics onto the positronic brain. If they'd done that with Skynet, The Terminator would have been a very different movie.

In all seriousness, I think we are much further off AI than we realise; we don't even fully understand how human intelligence works. We will no doubt discover many other useful and world-changing technologies along the way, so I'm all for striving for it.


----------



## Ray McCarthy (Feb 28, 2016)

Serendipity said:


> Research into human brains has shown that there is some kind of switch in their development that makes children about four or five develop self-awareness, consciousness, call it what you will. If we can work out what that switch is based on, then we can model it in computers.


Actually, that's pure supposition:
1) Children are self-aware much earlier.
2) We don't know when.
3) The supposed switch is hypothetical.
4) Just because we know how something in biology works doesn't mean we can program it into a computer!



Serendipity said:


> Fuzzy Systems and Evolutionary Algorithms.


Just jargon!
FS are just programs using probabilistic weighting of data rather than If or Case statements. Nothing to do with intelligence. In fact humans are terrible at estimating probability instinctively, i.e. other than working it out mathematically.
EA are nothing to do with evolution, or indeed intelligence. It's just a programming technique.

All "so called" AI jargon is deliberately misleading. It's about marketing, investment and grant funding.
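To show what I mean about EA being just a programming technique, here is a minimal sketch (illustrative Python; the fitness function and all the parameters are invented for the example). It's an ordinary search loop - mutate candidates, keep the fittest, repeat - with nothing "intelligent" about it:

```python
import random


def fitness(x):
    """A made-up objective: higher is better, peak at x = 3."""
    return -(x - 3.0) ** 2


def evolve(generations=200, pop_size=20, seed=42):
    """Plain (mu + lambda)-style loop: select the best half, mutate them."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                        # selection
        children = [p + rng.gauss(0, 0.1) for p in parents]   # mutation
        pop = parents + children
    return max(pop, key=fitness)


print(evolve())  # drifts toward the optimum near 3.0
```

Strip away the biological vocabulary ("population", "mutation", "fitness") and it's just stochastic hill-climbing - applied programming, as I said.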


----------



## Ray McCarthy (Feb 28, 2016)

steelyglint said:


> The 'quantum leap' party has my vote


Yes, I agree, though obviously I can't see ANY evidence that we are any nearer AI than in 1947. There is no reason to assume that the 'Three Laws of Robotics' apply; they are a fictional device for examining how such laws might be broken, and autonomous devices already break them. Conversely, all of the dystopian SF about AI is simply fantasy, even more so than Iain M. Banks's 'paradisical' AI.
All AI stories are at the level of fairies, wizards, etc. As a programmer of over 30 years who has been involved in so-called AI research, I can say that ANYTHING with 'AI' in it, in the media or even in university computer departments, is jargon. No intelligence at all, just applied programming and databases.


----------



## Stephen Palmer (Feb 29, 2016)

Serendipity said:


> 3. Research into human brains has shown that there is some kind of switch in their development that makes children about four or five develop self-awareness, consciousness, call it what you will. If we can work out what that switch is based on, then we can model it in computers.



It's not a switch and it all happens around 18 months.


----------



## michaelhall2007 (Mar 25, 2016)

Mirannan said:


> Let's assume for the moment that strong AI is possible, probably by some sort of self-organising and perhaps even pseudo-evolutionary process. I'm inclined to think that it is, particularly if one is not religious; for an example of a hugely complex network of nanomachines and computing devices with sapience, look in the mirror. Which means it's possible.
> 
> And that comes to the question of whether we (humanity or some subsection of it) should try to build AI at all - assuming that we actually have the choice; commercial and other pressures may force us into it. The latest mobile phones are a most unreasonable facsimile of intelligence, and I've seen video of robots with the ability to generalise from the particular, albeit in a rather crude and limited way. (Deducing correctly that an object of a different shape, not seen before, is a chair was the demo I saw.)
> 
> ...


It already does exist. In fact my TV got married last week. The reception was amazing.


----------

