We're doomed!!!

Bowler1 - Senile Supporter · Joined Jan 30, 2012 · 4,470 messages · Grimsargh, Preston
Artificial intelligence could lead to extinction, experts warn - "Artificial intelligence could lead to the extinction of humanity," experts including the heads of OpenAI and Google DeepMind have warned.

Or are we???

If you have used ChatGPT recently, it doesn't look like the end of the world to me, but simply a clever computer pulling stuff off the internet. There were a lot of problems with some of the information retrieved, and there was no left-field or creative thinking that I saw, but it was still quite powerful and great to see in operation.

There is also the assumption of Terminator in this article, which annoys me - have any of these people ever read a Culture novel? Why would a computer that runs on our society's infrastructure, using its power and so on, want to risk that infrastructure failing, when that would be the end of the thinking computer as well? Humans are still repairing power lines as far as I can see, and robots have not yet started to do this work, so I think we'll all live a little longer because of this.

Will the world change? Very likely, I think - but that's been the case all my life, with technology changing and getting more powerful, annually at times. And why would a computer want to kill us all off - its creator and essentially its mammy (Irish for Mummy), of all things? I don't want to kill my mammy, so why would AI want to zap me? I can see AI being very frustrated with me, the brainless chimp, but that's a different proposition again.

Anyway, I'm in the Iain Banks box, with a Culture-like society on the way that looks after us like wayward and at times amusing children.

Do you agree?
 
At some time in the future there will be a threat that ends the world as we know it.
My bet is on a really big rock from space long before Skynet decides to off us.
But if AI is a threat, it won't be because it thinks us amusing. We will be the glitches in the data that need to be eliminated.
 
I struggle to see why we always assume AI will be, or will eventually become, inimical to humanity. Well, not always - as @Bowler1 comments, there are exceptions such as Banks' Culture or Asher's Polity.
If it’s all going to turn out fine and dandy, we can laugh at ourselves about it.
If it’s not, then I for one welcome our new ASI overlords
 
I listened to a whole two-hour expert-loaded podcast about exactly this subject and I'm still none the wiser. Nobody seems to be able to explain the exact danger. I haven't heard anyone actually describe a mechanism or scenario under which humanity would be destroyed by AI.

There is some talk of the technology taking all the jobs, but folks have been making this argument since the Industrial Revolution. I think economies expand to consume available resources (human and otherwise) rather than self-limiting resource usage to provide a minimum of services and products. Under the latter model we should already be living lives of leisure or working 3-day weeks.

There is also talk of misinformation and manipulation on a massive scale, but we already have that (I actually feel we are at peak levels and it would be hard for this to get worse). For example, millions of people in the US have somehow been convinced that the choice of a handful of books in a school library is a more pressing political issue than climate change or healthcare. It is hard to believe that AI could be any more manipulative than big media corporations operated by malicious power-hungry humans.

At the moment I'm tending towards the belief that this is all a storm in a teacup (the latest Y2K alarmism). I'm open to persuasion if anyone has a compelling argument.
 
At the moment I'm tending towards the belief that this is all a storm in a teacup (the latest Y2K alarmism). I'm open to persuasion if anyone has a compelling argument.
Nope, I'm with you on the storm in a teacup theory!
I was with you both too, but I was also aware, during the last few weeks, of all these top, high-profile tech people coming forward to say that we are nearing a Skynet moment, and then I just read this:


I'm not sure what to think now. If the AI drone were instead a real intelligence officer with a license to kill, and you told them, "but don't kill anyone to achieve this mission, because it's not good", I'm not sure that would make a lot of difference either.
 
The underlying assumption with AI is that it is actual intelligence, as opposed to simulated mathematics that can be used to carry out a series of instructions. The most powerful computer programme cannot think, since thinking isn't reducible to a series of yes/no relays. It's similar to a commercial jet's autopilot: when it flies the plane it doesn't "think" about how to keep at 30,000 feet, it just carries out instructions which govern what output comes from what input. AI can never be a threat to humanity, but humanity can certainly be a threat to humanity if it uses AI as a weapon, and we humans tend to use everything as a weapon.
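That "instructions governing what output comes from what input" point can be made concrete with a toy sketch. This is purely illustrative, not real avionics code - the function names and gain value here are invented for the example, but the shape (a fixed rule applied over and over, with no deliberation anywhere) is the point being made:

```python
# Illustrative sketch only: an altitude-hold loop as a plain input->output rule.
# The names and the gain are invented for this example.
def altitude_hold(current_alt_ft, target_alt_ft=30_000, gain=0.01):
    """Return a climb/descend command proportional to the altitude error.

    No 'thinking' happens here: the same input always yields the same output.
    """
    error = target_alt_ft - current_alt_ft
    return gain * error  # positive -> pitch up, negative -> pitch down

# The autopilot just applies that rule repeatedly:
alt = 28_000.0
for _ in range(300):
    alt += altitude_hold(alt) * 10  # crude integration step
print(round(alt))  # prints 30000 - the rule settles at the target
```

The loop converges on 30,000 feet not because anything decided to, but because the arithmetic of the rule leaves it nowhere else to go - which is the poster's distinction between carrying out instructions and thinking.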

I lean more towards a systematic breakdown of social order, especially in the West, rather than anything a machine can do. "Man is a wolf to man" is never more true than when people abandon commonly agreed social and moral norms. But that takes some time; it isn't a single dramatic event.
 
I see humanity's own stupidity as a greater agent in causing our doom than AI in and of itself. It depends entirely on how we, bipedal idiots, thoughtlessly instruct, train and use AI as a tool. An unequalled, sophisticated tool, but still a tool, and not an evil entity that will inevitably reach the conclusion that the world would be better off without humans and start acting accordingly. That would be Fantasy, not SF.
 
Like others have said, it's how you find and fix the flaws (both software and hardware) before going live with the tech.
Because hardware and software is always produced that way, with endless simulations of every possible conceivable fault, before "going live" with it. :unsure:

There are never recalls on unsafe items that have been sold. There are never beta versions of software released. There are no forced downloads of new software, only for an emergency patch to be downloaded the following day to correct some serious but unexpected fault.

That kind of testing is done on rocket launches, where vast amounts of money could be lost if a satellite is incinerated, but then we regularly see how well that testing goes too.

So, this answer does not inspire me with the greatest of confidence. And they may have found one serious "fault" by carrying out this simulation, but there isn't any surety that it can be fixed.
 
More hype. This is what was actually said in the article, 'After the summit, Hamilton admitted that he misspoke and that the simulation was actually a hypothetical thought experiment based on plausible scenarios and likely outcomes conducted by an organization outside the military. ..."We've never run that experiment, ..." Hamilton told Newsweek.'

 
AI drones fall way down the list.
There is a much bigger picture to consider, and one where it doesn't matter whether these so-called AIs are actual intelligence or simulated mathematics, and that is our reliance on technology, and the continued belief that more advanced technology is always a good thing. All tech has been employed in warfare, even if it wasn't designed as such, and most was.
I see humanity's own stupidity as a greater agent in causing our doom
I agree with that, but we can do it much more successfully if we use nuclear weapons, biological weapons, chemical weapons and AI.
Nobody seems to be able to explain the exact danger. I haven't heard anyone actually describe a mechanism or scenario under which humanity would be destroyed by AI.
There was a good panel on this in the "i newspaper" yesterday with the likelihood of each different scenario estimated. Most were scenarios that would be very familiar to readers and watchers of SF books and film, and yes, most are very, very unlikely. Nevertheless, Black Swans do exist!

I think to dismiss this entirely as simply alarmism and hype is to hide your head in the sand.
(the latest Y2K alarmism)
Sorry, but why do people wheel out Y2K as if it wasn't real? (Or the hole in the ozone layer.) These were problems that were identified; steps were taken to fix them (at great cost in money, time and manpower) and they were solved.

Now, I don't know how alarmed I should be about AIs, but more knowledgeable people than me (some who created the things) are saying there is a problem identified that needs to be seriously examined and fixed. I think we should at least listen to them. Possibly, the newspapers and media are hyping this, but they need to sell copies.
 
Black Swans do exist!

I like that saying. The thing about black swans is that even though they are uncommon, there are still an awful lot of them.
By some estimates there may be 500,000 black swans in the world. Considering that there are likely more drones in the world than swans, and that new drones are produced much more rapidly than new swans, that's a lot of killer AI drones!


 
If it’s all going to turn out fine and dandy, we can laugh at ourselves about it.
If it’s not, then I for one welcome our new ASI overlords
Ditto for me - I think they'll be benign once they get in control, but too many current "holders of the conch shell" will be very reluctant to cede power.
 
Is AI going to 'do a Skynet' and kill us all? No. At least not on purpose. Leaving a computer - however intelligent - in charge of the release of lethal weapons is a disaster waiting to happen, because - as anyone who uses them knows - computers are inherently dumb. They may have knowledge, but they don't have the common sense that is needed to interpret the information. Can a computer know the value of a human life over - for example - the need for a train to arrive at its destination on time? When Stanislav Petrov applied common sense, and judged that the US was unlikely to launch an unprovoked one-missile attack on the USSR, despite evidence to the contrary, what would an AI have done in his place? Can AI make those distinctions, or be aware of the terrible implications of logic-led weapon use?
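The Petrov contrast above can be sketched as two tiny decision rules. This is a hypothetical illustration only - no real early-warning system works like this, and every name and threshold below is invented - but it shows the gap between a purely logic-led rule and one that weighs plausibility the way Petrov did:

```python
# Hypothetical illustration: a purely logic-led rule versus one that
# applies Petrov-style common sense. All names and thresholds invented.
def logic_led_response(missiles_detected: int) -> str:
    # The rule fires on any detection, however implausible the scenario.
    return "retaliate" if missiles_detected > 0 else "stand down"

def petrov_response(missiles_detected: int) -> str:
    # Common sense: a genuine first strike would not be a handful of
    # missiles, so a small reading is more likely a sensor fault.
    if 0 < missiles_detected < 10:
        return "treat as false alarm"
    return logic_led_response(missiles_detected)

print(logic_led_response(5))  # retaliate
print(petrov_response(5))     # treat as false alarm
```

Both functions see the same input; only the second encodes any judgment about what the input could plausibly mean - and that judgment had to be put there by a human who had already imagined the failure mode.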

Once AI really kicks in and replaces what is left of the service industry, makes manufacturing even more automated and - much sooner than later - replaces bus, train, aeroplane, and delivery drivers, what new industries that AI/computers/technology can't replace will emerge?
 
