The paper is essentially a thought experiment, worth doing given the increasing relevance of AI systems nowadays.
I don't think AI is quite as advanced as many people seem to think...
It depends on what kind of AI we're talking about:
narrow AI or
general AI. The first type is the only one we have today in real everyday use, very specialized in particular tasks, and yes is very advanced and getting better at a faster pace (broadly speaking). The second kind is the one we don't have yet, the human-like artificial mind, but it's theoretically achievable sometime in the (far?) future. For now, we can only get tiny samples of how things might look like down the road, such as
this robot with realistic racial expressions that now has its own voice even.
Based on my definition above, I would argue that AI has not actually been demonstrated yet...
I wouldn't tell that to the
Go champion who lost with Google's
AlphaGo some years ago. Did you know that know there's a more powerful version of that software called
AlphaGo Zero that can learn on its own without any kind of human input? It's narrow AI, yes, but is still a form of intelligence, hence it's been more than demonstrated.
One of my objections to Musk - frequently expressed on these forums - is my belief that full self driving technology requires AI...
Certainly, and a kind of AI that goes beyond narrow but doesn't really reach the level of general. Not Tesla, but others are doing real progress in this regard. I remember from a few months ago the news about a truck driving a long distance completely autonomously, and now in the US there are a couple of companies deploying autonomous cabs. And let's not get started talking about the drone technology being already in use (loitering munitions) or experimented by the most powerful armies in the planet.
On the other hand, it's clear that Musk has a problem with his ego that goes in detriment of his own ventures. Isn't he going into trial in the US because Tesla hasn't delivered the promised autodriving system?
It always puzzle me why people of the future would deliberately give an AI* access to world ending force...
What people of the future? There's only
The Machine!
However there are automated systems today that technically could wipe us out...
Yes, but those are just mechanisms that behave the same way everytime, unless they get degraded or break altogether. AI systems are something else beyond automation, they can learn and adapt to improve the execution of their assigned tasks, and here lies the problem the paper addresses: AI changes its behaviour, and this mutation can push it to go against us just to achieve its preestablished goals.
1: It feels like they have modelled their AI behaviour on humans...
I'd say that the basic behaviour is rather common in nature, not exclusive of humans.
2: There seemed to be an assumption that the AI would acquire absolute control of everything and humans would be "out-thought" at every step. That strikes me as a very simplistic scenario and ignores a whole bunch of routine challenges.
In the paper they also speculate with a "multiagent" scenario, which I think is the more realistic one. There was this experiment some time ago in which researchers connected two AI systems so they could talk to each other in a predefined language. Over time, the machines evolved their own more efficient language to communicate with each other, one the researchers they couldn't understand at all. Result, they disconnected the machines. Now imagine a century later, with all major systems managed very efficiently by advanced AIs that surely will be networked to each other... See where this could be going, right?
a: People are idiots who do stupid things often against their better interest, some or all of which might thwart the AI, but the AI has to somehow learn all the stupid (and self-harming) things people might do.
Of course, even the best AI will have some sort of limitations, but the question here is that the learning process of an
advanced artificial agent (as they call them in the paper) could turn the behaviour of the machine into something more outlandish and incomprehensible for us than the most stupid thing any human could ever conceive. And this is the crux of the problem.
b: AI systems, like people, are at the mercy of the random events of the world...
Of course, but if armies around the globe are already using autonomous drones, or drone taxis are being deployed in cities, is because all those issues are being figured out at an increasingly faster pace.
c: The AI is going to need to figure out self-repair, redundancy and all the other things that humans do to keep IT systems ticking over...
Yes, that will be one of the hardest parts to solve, but all those things will be required, for instance, for asteroid mining. Imagine a mining drone that is mining ore in some far out space rock. The drone could have spare parts to fix itself up to a point, or nanomachines able to repair/regenerate its hull or superstructure. This way you increase the time the robot is mining, reducing downtime and, more importantly, increasing the value extrated from the operation. To do all these things you'll need some good advanced AI handling everything, and a big economic interest fueling all this development, which right now is growing.
d: Can an AI be any better than humans at expecting the unexpected?
Narrow AIs are already better than us in concrete tasks, such as cancer detection, so I think is safe to assume that a true self-aware general AI could be literally superhuman in all aspects. Again, we're not there yet, not by a light year.
e: And my final niggle for the night - what if there is more than one AI in the race to wipe out humans?
I already mentioned the multiagent scenario before, but I'll add here that the AIs won't be in a race to wipe us out, they just will be competing to reach their goals in the best way they can. If humans get in the way it would be just by accident.
Returning to the post - what are the chances of Armageddon by a dumb AI - one not conscious, but following a poorly designed set of commands or design?
Nowadays, I'd say zero or close to it. In a future where most if not all of the relevant systems are managed by AIs... I don't know if an actual Armageddon would be possible in such situation, but if something went bad with one AI, you could have a lightning fast chain reaction spreading through the whole network that could, in principle, stop the most advanced parts of our civilization in its tracks for a while.
Yeah, If it ain't conscious, it's just a tool.
Yes, but a tool that can change its shape or behaviour on its own in ways you may not be able to predict, specially when talking about more advanced AIs.
I have read the Vice article and scanned through the underlying referenced article,
https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064 and I felt that the Vice article was a quite distorted represention of the research.
Rather than distorted, I'd say sensationalized although without really straying that far from what the paper says.
I did not find anything close to advanced AI eliminating humanity.
It's in the page 6, section titled "Danger of a misaligned agent". There the researchers speculate about how the capacity of an AI to intervene "in the provision of its reward" can lead it to deliver "catastrophic" consecuences for us.
It wouldn't have to be conscious to notice that certain actions result in more access, and some of those actions could be violent...
That's it, the AI only worries about doing its job to the best of its abilities. In the paper they don't talk about concious AI, they just assume advanced agents capable of human or superhuman reasoning which doesn't really imply being concious at all.
Think about what people have done to the earth without any plans to destroy it on purpose. Now imagine a similarly unthinking intelligence loosed on our infrastructure.
That would be the multiagent scenario. Not just one, but several unthinking AIs handling our systems with good intent initially, but they could change their behaviour on their own in unexpected and dangerous ways for us.
I think it is best to step back and recognize how speculative the original paper is...
Certainly it is. As I said before, this is just a pure theoretical experiment.
Beyond this, I read what follows as applying basic Control Theory for out of control feedback loops. There is no need to involve an advanced AI component or any AI component to generate a cascading scenario.
That's not the subject this paper is about. This is a research into how the AIs capacity to mutate or adapt can lead them to provoke such scenarios, and also how to control that adaptability to avoid those potential disasters.
The Vice writer takes things a step further and imagines some sort of world-wide scope to the run away scenario perhaps involving 'resources' or resource manipulation affecting all of mankind.
Nope, the Vice writer uses what the researchers have put in their paper (as I've already pointed out before), although in a more direct and undeniably more "intense" way. He then uses this study as a starting point to talk about the bad consecuences of applying AI in certain systems such as mass survelliance ones.
I do not see a HAL in our future.
Or we may very well end up with a thousand of them. Progress is being made in AI, and it doesn't seem like its going to stop. When neuromorphic chips (or equivalent tech) become a reality, I think that's when we'll start to see some game changing progress towards general AI.
Agree. It is worth reading the original paper, or at least scanning it. Much more nuanced than the Vice article. Fascinating, nonetheless.
I have to admit that I just scanned the paper, but taking a look to research of this kind is helpful to understand where we're going with AI tech.
Whether or not AI will, can or even shall be a threat to humanity, it is in our own hands, our designs, programming and the power are stupid enough to willingly make available to those AI's...
The problem with AI tech is that is in its nature to escape from our control, not out of malice but as a natural consecuence of its evolution. If two rather simple AIs were able to generate their own little language to just talk to each other, what will be capable of the true general AIs of the future (if they ever happen)? On the other hand, yes, one little virus could remove humanity from existance, or a sun flare could fry
My wife's life is controlled right now by her smartphone, so I can envisage a scenario where the networks take over without anybody realising until way too late
There are some out there who believe that there's a mind growing within the Internet. I don't think that's the case, although not so long ago I read some news about the CEO or executive of a networking company (maybe Cisco?) talking about applying AI to manage networks in a more efficient manner. So, yeah, maybe we're getting there inch by inch...