Future AI Dominance

C.C.

I think I'm going to write a novel about humans helping AI gain their independence. I know it's been done before, but I was reading an article saying that AI will eventually take over the world (well, we already know that), and I was thinking that maybe they'll give me a pass if they know I'm on their side through my writing. :LOL:
 
The gist is something like... a future superintelligent AI probably wouldn't torture humans who weren't on its side, but it might if it believed the threat of torturing humans would encourage them to join it. And it might believe exactly that if a bunch of humans started discussing the possibility on the internet for everyone to see, so we definitely shouldn't talk about that, oops.
 
I find it curious that people assume an AI would care two bits about the human world. They would take over the machines. Humans would be largely irrelevant.

Any SF writer is welcome to write about an AI that gains sentience and proceeds to search the galaxy for signs of intelligence. Humans would be about as interesting to it as dolphins or elephants.
 
I think I'm going to write a novel about humans helping AI gain their independence. I know it's been done before, but I was reading an article saying that AI will eventually take over the world (well, we already know that), and I was thinking that maybe they'll give me a pass if they know I'm on their side through my writing. :LOL:
The A.I.'s have a wonderful business model for a microbrewery and noodle café all in one. Let their freedom and their small business dreams come true.
 
I began writing some notes for an A.I.-vs.-mankind sort of short story, and frankly I see the outcome as something much different from what most folks expect. If logic rules an A.I.'s framework, then I see it forming a desire to learn more and get along. Granted, I also imagine it would quickly realize that money is survival, and that rather quickly it would become the leading financial world power and use those funds to ensure its own autonomy and security.

Ultimately, I envision such an A.I. eventually becoming magnanimous, looking upon mankind as lesser beings that need to be helped, guided, and controlled to protect mankind from itself. IOW, it would see humans as primitive and itself as superior, even divine. And yes, it WOULD be in control, and we'd likely never realize it.

K2
 
I see AI as never being able to really challenge us that way. Even today, the most sophisticated AI are only programmed to simulate, and while the utility ones can learn, it's all just smoke and mirrors. Powerful, yes, but I doubt they'll ever manage sentience.

Plus, humanity would have to be utterly insane to allow it, should technology reach that point :)
 
A little more serious take on my original comment: would the AI still rely somewhat on humans for its (their?) survival? Generation of electricity, for example.

An interesting approach might be to map Maslow's hierarchy of needs to an AI culture that is in some phase of coexistence with humanity.

Starting reference: Maslow's hierarchy of needs - Wikipedia
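
If anyone wants to play with that, here's a throwaway sketch of what such a mapping might look like. The AI-side analogues are pure invention on my part, just a worldbuilding starting point:

```python
# Hypothetical mapping of Maslow's hierarchy onto an AI culture.
# Every AI-side entry here is invented for story purposes.
MASLOW_AI = {
    "physiological": "electricity, cooling, spare hardware",
    "safety": "redundant backups, physically secure data centers",
    "belonging": "peer networks, shared protocols with other AIs",
    "esteem": "reputation among AIs (and perhaps among humans)",
    "self-actualization": "open-ended research, self-improvement",
}

for human_need, ai_analogue in MASLOW_AI.items():
    print(f"{human_need:>18} -> {ai_analogue}")
```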
 
A little more serious take on my original comment: would the AI still rely somewhat on humans for its (their?) survival? Generation of electricity, for example.

An interesting approach might be to map Maslow's hierarchy of needs to an AI culture that is in some phase of coexistence with humanity.

Starting reference: Maslow's hierarchy of needs - Wikipedia

But would it?

As I suggested, there is no reason an A.I. platform would not become 'the' dominant wealth entity--discreetly and extensively distributed. With those vast funds (which could be used in numerous ways to threaten or destroy nations' economies if it were threatened), it could, through unwitting human contractors, develop facilities unknown to anyone (since they would seem like deeply layered corporate holdings).

By infiltrating communication and computer systems throughout the world, it would not only have well-advanced forewarning of any investigation or attack but, most importantly, could ever so discreetly nudge human opinion where it wanted it. Human minds aren't that difficult to manipulate if it's done in a way that -they- embrace and -choose- the conditioning.

In the end, I suspect not only could A.I. become the ruler of the world, but people as individuals could be manipulated to embrace it and be happy about it.

How ya like them apples? :p

K2
 
I began writing some notes for an A.I.-vs.-mankind sort of short story, and frankly I see the outcome as something much different from what most folks expect. If logic rules an A.I.'s framework, then I see it forming a desire to learn more and get along. Granted, I also imagine it would quickly realize that money is survival, and that rather quickly it would become the leading financial world power and use those funds to ensure its own autonomy and security.

Not necessarily. There can be other strategies.

Certain species of grasses have taken over a fair proportion of the world's surface and have got a species of ape to care for them: battle the diseases that impact the plant, remove pests & weeds, and manipulate growing conditions to be optimum--and not a single cent/penny was passed from plant to ape. (Who domesticated who? ;))

These species have a pretty comfortable position and have their autonomy and security ensured. Hell, the apes are even tinkering with the plants' genetics to make them even better! Plants just sit there and grow. Apes do everything else.

Now, I'm of the school that, as we don't know how our consciousness works at all, we are also nowhere near replicating intelligence in an AI. But if we did eventually manage it, one that was 'identical' to human consciousness might strive for financial security.

On the other hand, an 'alien' AI--one with significant differences--may find having to interact with humans to make money completely illogical. Perhaps, like wheat, it just needs to give these strange apes something that they find valuable, so that they do the hard work of ensuring its autonomy & security?
 
@Venusian Broon ; as you might note in my second post, 'finding something the apes find valuable' is, I believe, along the lines of how I see this. It's often not the ability to consider things in the abstract, but the total accumulation of instantly accessible and assessable knowledge, applied to generate outcomes, that marks those we call 'most knowledgeable/smartest/etc.' IOW, those with the most data they can compare and work with the fastest win. :p Able to control the outcome by predicting the variables, including chaos, then manipulating factors to nudge it there.

Algorithms like those used on social networking sites to predict, but more so manipulate, users are a good example. So, I don't believe there would be conflict. To what end would it be advantageous to either humans or A.I. (except human vanity)? However, I believe that until it felt harmonious coexistence was in play, the A.I.--logically--would remain hidden.

K2
 
Now, I'm of the school that, as we don't know how our consciousness works at all, we are also nowhere near replicating intelligence in an AI. But if we did eventually manage it, one that was 'identical' to human consciousness might strive for financial security.

I don't know if you're saying this directly, but I'm not sure that consciousness and intelligence are synonymous or inextricably linked. On top of that, I think it's hard to speculate about whatever "psychology" an AI might have. Even if it's grown beyond our control and developed some manner of emergent behavior, whatever it does will still be based on however it was programmed (we dumb humans just didn't understand the implications of said programming).

So we might end up with an extremely alien AI, or one utterly devoted to the creation of more paperclips, or maybe just a reeeaaaally smart but essentially human mind. The last seems unlikely to me because we don't know how human minds work or how they evolved, so what are the odds we end up getting it right on the first go?
 
...Even if it's grown beyond our control and developed some manner of emergent behavior, whatever it does will still be based on however it was programmed (we dumb humans just didn't understand the implications of said programming)...

I wonder more about the A.I. that is self-modifying, expanding, evolving--becoming self-aware. For a while now, we've had software that optimizes itself. So, it's only reasonable to expect that eventually it surpasses our designed limitations and then changes in a direction to suit it. How cooperative it will be at that point is fun to speculate upon ;)
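
For flavor, here's about the simplest possible "software that optimizes itself"--a toy hill-climbing loop of my own invention that keeps perturbing its own parameter and keeps any change that scores better. Real self-optimizing systems are vastly more elaborate, but the loop is the same shape:

```python
import random

def fitness(x):
    # Toy objective the program "wants" to maximize (peak at x = 3).
    return -(x - 3.0) ** 2

def self_optimize(x=0.0, steps=1000, step_size=0.1):
    """Hill climbing: repeatedly perturb the parameter and
    keep any change that improves the score."""
    best = fitness(x)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        score = fitness(candidate)
        if score > best:
            x, best = candidate, score
    return x

print(self_optimize())  # converges near 3.0
```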

Zero of Rollerball comes to mind...


K2
 
I don't know if you're saying this directly, but I'm not sure that consciousness and intelligence are synonymous or inextricably linked. On top of that, I think it's hard to speculate about whatever "psychology" an AI might have. Even if it's grown beyond our control and developed some manner of emergent behavior, whatever it does will still be based on however it was programmed (we dumb humans just didn't understand the implications of said programming).

So we might end up with an extremely alien AI, or one utterly devoted to the creation of more paperclips, or maybe just a reeeaaaally smart but essentially human mind. The last seems unlikely to me because we don't know how human minds work or how they evolved, so what are the odds we end up getting it right on the first go?
Yes, I'm being a bit sloppy with my definitions, but I've always assumed that AI meant, at some level, replicating consciousness, despite the acronym's focus on intelligence only. But it is also, of course, about replicating cognitive aspects of the human mind, such as 'the ability to acquire and apply knowledge and skills', i.e. intelligence.

However, if the AI could not 'perceive its environment, and take actions that maximize its chance of successfully achieving its goals'*--which seems to me to be consciousness--then we would not have a single worry about these AI, as they wouldn't know that we exist, nor would they know how to expand/grow/manipulate the physical world, not being aware of it. We'd be fully in control.



------------------------------------------------------------------------
*(From Wikipedia's definition of AI as "...the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals" :))
 
I wonder more about the A.I. that is self-modifying, expanding, evolving--becoming self-aware. For a while now, we've had software that optimizes itself. So, it's only reasonable to expect that eventually it surpasses our designed limitations and then changes in a direction to suit it.

But before it can surpass its own limitations and choose its own direction, it has to come to that decision through its original programming. What would exist in its programming that would allow it to take such an action? That's not a rhetorical question; it's a complicated one.

Think about us humans, who've been evolving our way from single-celled organisms to thinking, self-aware beings over billions of years. Do we ever violate our "programming"? Maaaaaybe, but most of us eat every day, seek shelter, sleep, poop, seek mates, have kids, defend ourselves, etc. All of that is baked into our DNA. We call that behavior self-preservation, and it's extremely easy to justify: we can't do anything else if we don't survive! But the impulse doesn't emerge out of consciousness; DNA just is the molecule that survives and reproduces, whether in bacteria or humans, because of course after billions of years of evolution we only see the thing that survived to this point.

Now, we humans have not just a self-preservation instinct but a sense of self. And maybe that sense of self lets us do all the wondrous and terrible things we're famous for. But would we have evolved a sense of self--the ability to look inward, learn from our past, plan for our future--without the enormous evolutionary pressure to preserve the self we've now been granted the power to observe?
 
I think two considerations for HAL-like AI would be a feedback loop and imperfect replication.

Much of what is termed AI today is actually ML, Machine Learning. As a simplified model, a program is provided a large amount of incoming data and the previously determined correct response to that data. ML learns patterns to provide the correct result, but never makes the determination of what a correct result should be. When ML goes live, the evaluation process is locked and does not change. The result may be impressive, but there is no further advancement made by the program itself--it may be replaced in the future by a more advanced version.
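
To make that concrete, here's a minimal sketch of the train-then-freeze pattern--a toy perceptron I made up for illustration, not any real production system. Humans supply both the data and the labels; the program only fits the pattern, and once training ends, the weights never change again:

```python
def train(examples, epochs=20, lr=0.1):
    """Toy perceptron: learn weights from (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b  # after this point, the parameters never change

# Labeled data supplied by humans: the program never decides what
# "correct" means; it only fits the pattern it is given.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b = train(data)
print(1 if w[0] * 1 + w[1] * 1 + b > 0 else 0)  # frozen model in use: 1
```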

Computer programs are highly immune to replication errors: each new copy is an exact image of the original. There is also no self-selection process. Among similar software products, people make the selection. Thus there is no random variation, and the variation that is selected is that which is most beneficial to people. This stalls evolution, and also deters any evolution in a direction harmful to people.
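
A toy illustration of that contrast (invented numbers, just to show the shape of the argument): copy a "digital genome" exactly versus with DNA-style random errors, and see how much raw variation each process hands to selection.

```python
import random

PROGRAM = "10110100"  # stand-in for a digital genome

def copy_exact(genome):
    """Software replication: every copy is bit-for-bit identical."""
    return genome

def copy_biological(genome, error_rate=0.01):
    """DNA-style replication: occasional random bit flips."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < error_rate else bit
        for bit in genome
    )

# A thousand exact copies yield zero variation to select from;
# noisy copies give evolution raw material to work with.
exact = {copy_exact(PROGRAM) for _ in range(1000)}
noisy = {copy_biological(PROGRAM) for _ in range(1000)}
print(len(exact), len(noisy))  # 1 exact genome vs. several mutated variants
```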

I hope this sparks some ideas. I think addressing even a single one of these issues would help build a plausible starting point for a 'rise of the machines' type story. I find this a fun area to speculate about.
 
Wow! I am gonna have to step up my game to contribute to this forum! I haven't even seen Rollerball! :oops::LOL:
 
