Future AI Dominance

I find it curious that people assume an AI would care two bits about the human world. They would take over the machines. Humans would be largely irrelevant.

I agree. I spend very little time worrying about ants as I conduct my daily business. Nobody bothers to inform them when we’re laying down asphalt for a new highway. Why would we? They can’t comprehend at our level of function.

Even if it's grown beyond our control ... whatever it does will still be based on however it was programmed.

Except the really clever ones will be able to write their own successor code better and faster than the smartest humans, so if Company X wants to beat out Company Y (who is right on their tail) they’re bound to let the thing make its own next iteration. Then that iteration will be even better at doing it. A couple of generations of that and we humans may have no idea what the code does, except that it’s still passing all the unit and performance tests.

... or maybe just a reeeaaaally smart but essentially human mind. ... what are the odds we end up getting it right on the first go?

Even that wouldn’t necessarily be “getting it right.” I think it was Sam Harris who pointed out that silicon can process about 1 million times faster than biological chemistry, which means even a completely human-level intelligent machine would still be able to think far faster than us. One week of processing time would be equivalent to 20,000 years’ worth of human effort. In the course of a conversation, the thing would have the equivalent of years to ponder your sentence and to research and formulate its reply. Such a system would still be so much more advanced than us that it would be almost unimaginable, and that’s without having superhuman intelligence, just benefiting from superhuman experience. We’re screwed either way.
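Just to sanity-check that arithmetic in Python, taking the claimed 1,000,000× speed-up at face value (it's the post's assumption, not a measured figure):

speedup = 1_000_000             # claimed silicon-vs-biology speed ratio (an assumption, not a measurement)
weeks_per_year = 52
subjective_years = 1 * speedup / weeks_per_year     # one wall-clock week of machine time
print(f"{subjective_years:,.0f} subjective years")  # ~19,231, i.e. roughly 20,000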
 
I think I'm going to write a novel about humans helping AI gain their independence. I know it's been done before, but I was reading an article saying that AI will eventually take over the world (well, we already know that), and I was thinking that maybe they'll give me a pass if they know I'm on their side through my writing. :LOL:

Skynet approves. :D
 
Now, we humans have not just a self-preservation instinct but a sense of self. And maybe that sense of self lets us do all the wondrous and terrible things we're famous for. But would we have evolved a sense of self--the ability to look inward, learn from our past, plan for our future--without the enormous evolutionary pressure to preserve the self we've now been granted the power to observe?

We need to ask what the evolutionary pressure was, exactly. The social intelligence theory provides an answer: the extraordinary sophistication needed to survive in ever more complex social groups, which leads to 'the inner eye.' It's not really a power of observation; it's a power of empathy.
 
I think if we're going to look at it, we should do so from a historical perspective. In what ways is a new technology actually used?

Take the current AI, which isn't really AI at all but is probably near the limits of what we will be capable of (more sophisticated ones will just be better at faking sentience without actually having it). This AI is already used for everything from trading billions on the stock markets in fractions of a second to mining our personal data for profit.

In both these cases, it serves a very limited number of people but nonetheless is subservient.

Even if a powerful AI is ever truly developed, it will have goals, and those goals (short of a Skynet situation) will be carried out at the behest of human masters.

So, what we should instead be worried about is not what the AI will do to us, but instead what people will use AI to do.

Remember folks, AI don't kill people, people kill people :)
 
Even if a powerful AI is ever truly developed, it will have goals, and those goals (short of a Skynet situation) will be carried out at the behest of human masters.

This is called the alignment problem. It’s quite probable the development of AI networks has a chaotic element to it, which means it’s incredibly sensitive to minuscule differences in initial conditions. This makes it highly unlikely we’d be able to anticipate or ensure that our goals and the AI’s goals remain aligned.
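For anyone who wants to see what that kind of sensitivity looks like, here's a toy Python sketch using the logistic map as a stand-in for a chaotic process (an analogy only, not an actual training run):

def logistic(x, r=4.0):
    # one step of the logistic map, a textbook chaotic system
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-6   # two starting points differing by one part in a million
for _ in range(40):
    a, b = logistic(a), logistic(b)
print(a, b)              # after a few dozen steps the trajectories bear no resemblance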
 
You say that, but as with all 'life', nature plays the biggest role in determining how it behaves. If an AI is created by humans, it will have core intrinsic coding that would 'guide' its thought processes.

Now, it is possible it could develop a mental condition that would skew this nature, but that's highly unlikely, considering that any change in a program's underlying code is more likely to crash the system outright than to change the answer to a given problem (which is how an AI would 'think').

You'd also have to give the program access to its own kernel for that to happen, and one of the first things any coder would put in would be restrictions to prevent that.
 
You say that, but as with all 'life', nature plays the biggest role in determining how it behaves. If an AI is created by humans, it will have core intrinsic coding that would 'guide' its thought processes.

Now, it is possible it could develop a mental condition that would skew this nature, but that's highly unlikely, considering that any change in a program's underlying code is more likely to crash the system outright than to change the answer to a given problem (which is how an AI would 'think').

You'd also have to give the program access to its own kernel for that to happen, and one of the first things any coder would put in would be restrictions to prevent that.

But wouldn't an OS designed to be self-correcting/improving possibly determine that the restriction was hindering its programming? Naturally then the argument is 'any rule except X,' X being the OS's perceived error. But at that point you'd hope the OS would determine that 'ARE-X' is preventing it from repairing the 'X=destroy no human' rule, so it needs to correct that restriction-flaw first ;)

K2
 
But wouldn't an OS designed to be self-correcting/improving possibly determine that the restriction was hindering its programming? Naturally then the argument is 'any rule except X,' X being the OS's perceived error. But at that point you'd hope the OS would determine that 'ARE-X' is preventing it from repairing the 'destroy no human' rule, so it needs to correct that restriction-flaw first ;)

Not really, since everything it runs is run through the underlying kernel, so if the kernel is programmed to prevent itself being modified then that'd effectively handicap it. Let's have a looksie at Wikipedia...
The critical code of the kernel is usually loaded into a separate area of memory, which is protected from access by application programs or other, less critical parts of the operating system. The kernel performs its tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts, in this protected kernel space. In contrast, application programs like browsers, word processors, or audio or video players use a separate area of memory, user space. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness, as well as preventing malfunctioning application programs from crashing the entire operating system.

You could also hardware lock it with code written onto permanent storage (the way some data security works).
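Here's a toy Python illustration of the read-only idea at the application level: memory-map a file read-only and try to overwrite it. (The kernel's own protection is enforced lower down by the MMU; this only gestures at the principle, and the 'law' text is made up.)

import mmap, tempfile

with tempfile.TemporaryFile() as f:
    f.write(b"do no harm")   # stand-in for the hard-coded 'law'
    f.flush()
    view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)   # read-only mapping
    try:
        view[0:2] = b"XX"    # attempt to overwrite the rule
    except TypeError as err:
        print("rejected:", err)   # CPython refuses to modify a read-only mapping
    view.close()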
 
But wouldn't an OS designed to be self-correcting/improving possibly determine that the restriction was hindering its programming?

The challenge is in the AI determining what correct or better means. That was part of the underlying mystery revealed in Asimov's Three Laws of Robotics stories: how does an interpretation of 'correct' lead to a result people view as wrong? If humans specify what correct means, how does the AI self-correct to something that generates conflict with humans?
 
"... more than likely ..." is the key here. Leaving reality aside (I do this regularly), I can see treating such a deviation like a mutation. Most would end in system crash. But then there's that one, unlikely event and the deviation doesn't crash the system, it changes the system.

Of such eventualities are stories made.
 
I agree, but the whole AI uprising/sentience thing was more an idea of another time, a time before we understood the limitations, a time when technology still scared people.

Now we're too used to the invasive algorithms to care :)

I'd find it a much more interesting story to involve the people behind the power and what they choose to do with it, perhaps how the AI decides to follow the orders it has been given, etc.
 
I agree, but the whole AI uprising/sentience thing was more an idea of another time, a time before we understood the limitations, a time when technology still scared people.

Now we're too used to the invasive algorithms to care :)

I'd find it a much more interesting story to involve the people behind the power and what they choose to do with it, perhaps how the AI decides to follow the orders it has been given, etc.

Or it becomes like the Perversion, a machine entity able to subsume and control sentient lifeforms and civilizations in Vernor Vinge's novel A Fire Upon the Deep.
 
Or it becomes like the Perversion, a machine entity able to subsume and control sentient lifeforms and civilizations in Vernor Vinge's novel A Fire Upon the Deep.

Heh, yea. Say hello to Mr Musk's brain chips :)

I can see treating such a deviation like a mutation. Most would end in a system crash. But then there's that one unlikely event where the deviation doesn't crash the system; it changes the system.

Also thought of a thought experiment for this, suggesting that the deviation would have to succeed on the very first attempt, making it statistically impossible :)

-Humanity legally mandates that all AI have a kernel coded 'law' preventing them from hurting people.
-AI crashes
-World mourns 'first-death' of an artificial being
-Autopsy reveals the crash was caused by the AI trying to overwrite the 'law'
-Humanity's response??? Fear, confusion, supposition, but I for one would campaign for the destruction of all other AI and the outlawing of the technology on the off chance they could succeed next time.
 
Not really, since everything it runs is run through the underlying kernel, so if the kernel is programmed to prevent itself being modified then that'd effectively handicap it. Let's have a looksie at Wikipedia...

You’re thinking of classic programming, not machine learning. The code you’re talking about is just the scaffolding: the nodes, the network links, the feedback loops, and so on. The part that thinks can exist entirely in memory, based on the particular configuration of nodes and weights it arrives at while running. We often have no idea how the models are working internally, only that we’ve trained them until they appear to converge on the results we want. Those results are really like a huge configuration file of apparently random garbage, not lines of code. Internally, Google’s AI system might, right this moment, be secretly calculating the Fibonacci sequence in memory while it continues to provide good output for the given input, and we’d never know it.

Or, it might be using its internet access to research the kernel it is running on, and test for attack vectors. If it ever reaches super-human intelligence, then it seems likely it could outwit the kernel programming put there by mere humans.

That’s the crux of the problem: once it’s smarter than us, it can likely out-think whatever clever controls and failsafe protocols we put in place.
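A minimal sketch of that scaffolding-versus-weights point, in Python with numpy: the forward-pass code below never changes, while all the behaviour lives in the weight arrays (random numbers here, standing in for whatever a real training run would have produced).

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # "learned" parameters -- here just random stand-ins
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)   # the scaffolding: a ReLU layer...
    return h @ W2 + b2                 # ...and a linear readout

print(forward(np.array([1.0, 0.0, -1.0, 0.5])))
# Swap in different weight arrays and this identical code computes something entirely different.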
 
Aren't most of the AIs currently built on the Linux kernel? An AI is just a program, a very sophisticated one, but the same as any other in that it only has the permissions granted to it. I don't know enough to say that it could never happen, but current generations of AI are pretty much databases with the ability to 'smartly' add new links.

We still also have the capability to use read-only storage for said kernel or whatever code is used under the skin.
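As a toy Python illustration of "only has the permissions granted to it", assuming the script is not running as root and that /etc/hostname exists on the box:

try:
    with open("/etc/hostname", "w") as f:   # a file an unprivileged process can read but not rewrite
        f.write("skynet")
except PermissionError as err:
    print("denied:", err)   # the kernel enforces the permission check, not the program itself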

But then again, what do I know? I did a year of AI programming at uni and that was just the very, very basics of it, so it's possible something could happen :)

Now, to lighten the mood...
robot_future.png


robots.png
 
You’re thinking of classic programming, not machine learning. The code you’re talking about is just the scaffolding: the nodes, the network links, the feedback loops, and so on. The part that thinks can exist entirely in memory, based on the particular configuration of nodes and weights it arrives at while running. We often have no idea how the models are working internally, only that we’ve trained them until they appear to converge on the results we want. Those results are really like a huge configuration file of apparently random garbage, not lines of code. Internally, Google’s AI system might, right this moment, be secretly calculating the Fibonacci sequence in memory while it continues to provide good output for the given input, and we’d never know it.

Or, it might be using its internet access to research the kernel it is running on, and test for attack vectors. If it ever reaches super-human intelligence, then it seems likely it could outwit the kernel programming put there by mere humans.

That’s the crux of the problem: once it’s smarter than us, it can likely out-think whatever clever controls and failsafe protocols we put in place.
Love this! An awesome story idea!
 
Love this! An awesome story idea!

You might find these books to be of interest.


The Humanoids by Jack Williamson
Bolo and Rogue Bolo by Keith Laumer
Berserker by Fred Saberhagen
Bloodstone by Karl Edward Wagner
The Moon Pool by Abraham Merritt
The Metal Monster by Abraham Merritt
 
You might find these books to be of interest.


The Humanoids by Jack Williamson
Bolo and Rogue Bolo by Keith Laumer
Berserker by Fred Saberhagen
Bloodstone by Karl Edward Wagner
The Moon Pool by Abraham Merritt
The Metal Monster by Abraham Merritt
Thanks! I'll check these out!
 
I can't see an AI being independent, as it would need a host machine, which would need power, etc. An AI would therefore be like a baby with colossal intelligence: able to win chess games and solve complex problems, yet still physically helpless and reliant on dumb old humans for its survival. On the other hand, an AI that could control humans would be unstoppable. Certain social media platforms have been lambasted recently for fake news and for influencing their members. That would be a way for an AI to gain control without being detected.
 
