Threat of a super-advanced AI to humanity, estimated as "likely" in paper

The Vice article (and now a second one that I have seen) says that this appears likely. It also indicates some sort of vague threat due to competition for resources.

In the paper, the likelihood of the outcome depends on multiple assumptions. For the result to be considered likely in the current world, though, each of those assumptions must itself be extremely likely. I note Assumption 1, where the AI mysteriously develops the ability to form hypotheses. This is indicated as not being provided by an algorithm, i.e., not programmed by humans (as we have no clue what this capability involves). AIs do not procreate, so there does not seem to be an evolutionary force involved. Even if there were, in our observed world the ability to form hypotheses has developed only once across multitudes of creatures spanning hundreds of millions of years. This assumption rests entirely on some sort of miracle occurring.
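The arithmetic behind that objection is straightforward: assuming the assumptions are independent, the probability of the conjunction is the product of the individual probabilities, so even generously high figures shrink fast. The numbers below are purely illustrative.

```python
# Conjunction of assumptions: even individually plausible assumptions
# multiply down quickly. The 0.9 probability and the count of five
# assumptions are illustrative, not taken from the paper.
p_each = 0.9
n_assumptions = 5
p_all = p_each ** n_assumptions   # probability that all hold at once
print(p_all)                      # roughly 0.59
```

So a chain of five "90% likely" assumptions yields a conclusion that is barely better than a coin flip, which is why each assumption must be extremely likely for the overall result to be called likely.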

Now, what are the resources that the AI is competing for? Water? Wheat? Even electricity does not make sense. Even if the AI, through some unexplained means, could redirect the flow of electricity, why would it? The underlying computer system cannot use extra electricity. And, as I noted above, the AI cannot reproduce little computer systems to utilize the electricity, either. So what resources would the AI compete for? And how would it possibly exert control over external resources?
I completely agree with you that what the paper says is pure abstraction, just a little thought experiment about a highly hypothetical scenario involving AI. I understand that in this paper they just wanted to explore the problems that a self-evolving advanced AI could end up causing while trying to achieve its goals, regardless of what physical means it has available and such things.

The scenario described is a classic feedback loop, which is the basis of control theory. The distortion of the reward signal is simply what signal processing would call noise. A feedback loop latching onto an unexpected output level is a recognized issue with control loops, and this latching onto an unintended result is precisely what is being described. The solution is merely to add a larger-scale filter or constraint: if an AI starts to give undesirable results, one would merely shut it down, revert to a prior checkpoint, and add another controlling rule to the mix.
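The latch-and-constrain behaviour described above can be sketched as a toy proportional feedback loop. All numbers and names here are illustrative: a distorted sensor makes a wrong state read as correct, so the loop settles there until an outer guard rule vetoes it.

```python
# A toy feedback loop: the controller adjusts `state` to drive the measured
# signal toward a setpoint. A distorted sensor (the "noise") makes a wrong
# state look correct, so the loop latches there unless an outer constraint
# (the "larger-scale filter" in the post) forces it back in range.

def sensor(state):
    # Distorted measurement: states above 80 falsely read as the setpoint.
    return 50.0 if state > 80 else state

def run_loop(setpoint=50.0, state=0.0, gain=0.5, steps=200, guard=None):
    for _ in range(steps):
        error = setpoint - sensor(state)   # feedback: measured error
        state += gain * error              # proportional correction
        if guard is not None:
            state = guard(state)           # outer constraint / "kill switch"
    return state

# Without a guard, a large disturbance lets the loop latch in the bad region:
latched = run_loop(state=90.0)             # sensor lies, error reads ~0
# With a hard bound added as an extra rule, the loop recovers the setpoint:
bounded = run_loop(state=90.0, guard=lambda s: min(s, 80.0))
print(latched, bounded)                    # 90.0 vs. ~50.0
```

The guard plays the role of the "another controlling rule" mentioned above: it does not fix the distorted sensor, it just refuses to let the system stay in the region where the distortion applies.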
I haven't studied control theory, so I'll trust what you say about it, but I wouldn't equate an AI's capacity for self-correction with noise. Noise is random and unpredictable; self-correction (or self-improvement) is a skill under the control of the AI. I agree that it should be possible to regulate such a capacity, but here comes the trick: a theoretical superhuman AI could eventually outsmart or evolve beyond that regulation, so again you go back to what the paper is trying to explain: how do you control/contain/direct such an entity to avoid a, for now, highly hypothetical catastrophe? And yes, if the AI is just in one mainframe or data center, it should be as simple as turning it off, but in the future it won't be that simple, thanks to the push toward a true Internet of Things (Alexa, Siri, etc.).
 
If we lose control of our technology and become so dependent on it that we cannot exist at all without it (and we are approaching that point), then we will become extinct at the hands of our technology and any AI technology that emerges.


A good science fiction cautionary tale on this subject is The End Bringers by Douglas Mason. This book is very good and largely forgotten.
 
I wonder how far along we are in the ongoing AI-verse. Let's compare it with the Internet & the World Wide Web, as it's the most recent huge world-changing technobobbins/poppins.

Are we (in 2023) where the Internet was in 1973? Or in 1983? Or in 1993?
Near the end of 1992 there were about 50-60 websites in existence, by the end of 1993 there were over 600, and by the end of 1994 there were more than 10,000.
Two years later, even a cackhanded twit like me was online.

What’s the AI equivalent?
 
I wonder how far along we are in the ongoing AI-verse. Let's compare it with the Internet & the World Wide Web, as it's the most recent huge world-changing technobobbins/poppins.

Are we (in 2023) where the Internet was in 1973? Or in 1983? Or in 1993?
Near the end of 1992 there were about 50-60 websites in existence, by the end of 1993 there were over 600, and by the end of 1994 there were more than 10,000.
Two years later, even a cackhanded twit like me was online.

What’s the AI equivalent?
We cannot really compare in terms of number of AIs vs. number of websites. I'd rather count the areas or tasks that are currently "infiltrated" by AI. For instance, you have assistants such as Alexa or Siri, and chatbots, or look how the use of AI generation tools for text or images has exploded quite recently. So, very broadly speaking, I'd say that we are already passing the equivalent of the 1994 era you've pointed out, because now you find AI almost everywhere and it's a trend that is growing really fast in many fields.
 
@Harpo - The internet, in concept, is much easier (and more scalable) to expand.

I used to laugh at the Terminator movies, and say "why don't they just kill the power?"

I like @Astro Pen's choice of words: "Beware the tool users". If the people who set up AI do it right (kill switches, remote control), we should be fine. The state of AI is woefully infantile right now. I wrote code for a living, and the AI languages I've seen are all trial-and-error routines, looking for the best path to take. Another danger is management: they are always in a hurry to make money, and when people rush, they make mistakes.
 
Terminator 2 is now.



From Smithsonian Magazine

This Shape-Shifting Robot Can Liquefy Itself and Reform

Researchers have created a miniature robot that can melt and reform back into its original shape, allowing it to complete tasks in tight spaces or even escape from behind bars. The team tested its mobility and shape-morphing abilities and published their results Wednesday in the journal Matter.

 
AI will be the slickest conman of all time. It already is.

"The slickness of the delivery is its major achievement. And that’s precisely how you know it’s a confidence game. But in one way, it’s all so fitting. The con artist always gives people exactly what they want. And in a post-truth society, nobody does this better than AI. So I predict great things for ChatGPT—at least in economic terms. It will certainly live up to Sneaky Pete’s standards: “I give people confidence. They give me money.”
 
Terminator 2 is now.



From Smithsonian Magazine

This Shape-Shifting Robot Can Liquefy Itself and Reform

Researchers have created a miniature robot that can melt and reform back into its original shape, allowing it to complete tasks in tight spaces or even escape from behind bars. The team tested its mobility and shape-morphing abilities and published their results Wednesday in the journal Matter.

Though the tech is still interesting, it isn't quite what is implied. The 'robot' is externally manipulated through magnetic fields. It is made of a compound that melts at 95 degrees Fahrenheit and, again, the heating is done externally. The other interesting characteristic of the material is that it has a high level of structural strength in its solid form.

I am sure there are interesting technical uses for a material that melts at relatively low temperatures and hardens into a strong solid. However, its use as a robot does not appear to be practical.
 
Though the tech is still interesting, it isn't quite what is implied. The 'robot' is externally manipulated through magnetic fields. It is made of a compound that melts at 95 degrees Fahrenheit and, again, the heating is done externally. The other interesting characteristic of the material is that it has a high level of structural strength in its solid form.

I am sure there are interesting technical uses for a material that melts at relatively low temperatures and hardens into a strong solid. However, its use as a robot does not appear to be practical.
It all starts somewhere.

But I understand the sentiment - It's a remarkable invention, but who would want to use one?
 
AI needs parents to teach it the naughty things before it is released into the world. AI cannot hack encryption any more than a human.
 
AI needs parents to teach it the naughty things before it is released into the world. AI cannot hack encryption any more than a human.
You are aware that humans hack encrypted data all the time, right? How hard is it for an AI to write and distribute keystroke loggers or other backdoor ways of finding passcodes?
 
You are aware that humans hack encrypted data all the time, right? How hard is it for an AI to write and distribute keystroke loggers or other backdoor ways of finding passcodes?
Do they hack the encryption itself, or exploit s/w bugs/errors? I'd be interested to know how they crack encryption.
 
Do they hack the encryption itself, or exploit s/w bugs/errors? I'd be interested to know how they crack encryption.
Key loggers, using your personal info to guess your passwords, using fake websites to capture your login info, finding security backdoors left by sloppy programmers, etc. A lot of hacking is scamming, not cryptography.

If people weren't involved, it would be rare for anything to be hacked.
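A back-of-the-envelope comparison illustrates why attackers guess human-chosen secrets rather than attack the cipher itself. The guess rate below is an assumed figure for illustration, not a benchmark of any real cracking rig.

```python
# Rough arithmetic on why attackers guess passwords rather than break the
# cipher. The rate below is an illustrative assumption, not a measurement.

GUESSES_PER_SEC = 1e10          # assumed offline guessing rig

def years_to_search(space):
    # Time to exhaust a search space of the given size, in years.
    return space / GUESSES_PER_SEC / (3600 * 24 * 365)

pin_space    = 10 ** 4          # 4-digit PIN
word_space   = 100_000          # single dictionary word
aes128_space = 2 ** 128         # brute-forcing a 128-bit key itself

print(f"4-digit PIN:     {years_to_search(pin_space):.1e} years")
print(f"dictionary word: {years_to_search(word_space):.1e} years")
print(f"128-bit keyspace:{years_to_search(aes128_space):.1e} years")
```

Exhausting a PIN or a dictionary is a fraction of a second; exhausting a 128-bit keyspace comes out to on the order of 10^21 years at the assumed rate, which is why the weak link is the human-chosen secret, not the encryption.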
 
