All that can go wrong with AI - AI Risks

I understand what you are saying, CC, but exactly the same was said when the 'computer age' started in the '50s and '60s. What actually happened was that new industries started up. As a minor example, some of the wealth generated allowed the leisure / foreign holiday industry to blossom. My parents had never been abroad (apart from my dad's brief visit to Dunkirk!) but now it's commonplace.

Edit: Looks like I've repeated what Christine was saying, only less eloquently.
This level of automation is actually good news for countries with declining populations (China, Germany and Japan); elsewhere, it is not such great news. On a global level, well-paid jobs are genuinely scarce.
 
The godfather of modern AI, Google-style, is now having second thoughts about the use of AI.

Warning of AI’s danger, pioneer Geoffrey Hinton quits Google to speak freely

Perhaps the real problem is that he had to quit Google before he could talk about it.

"Hinton is also worried about a proliferation of false information in photos, videos, and text, making it difficult for people to discern what is true." Considering the fact that this is already happening, it is strange that he would say that "Google has acted very responsibly." Yeah, right.

He was in a position to see how his AI (Google bought it from him) was being used on a daily basis from day one, yet only now, after 10 years, does he have second thoughts about it?

If it really bothers him, maybe he should use all the money he got from developing AI for the corporate world to build a technology to counteract it. Once upon a time, talking about something raised awareness and led to some kind of positive action. Now talk does nothing except sell more copy.
 
This is a list of the risks I think we currently face, given the current level of artificial intelligence. Feel free to add more risks and discuss them.

Current risks:

Information - the proliferation of fake news backed with images and voice.
Just tackling the first one: To some extent the original scientific method already contains an answer to that, because it is always possible for a good fraudster to create something that looks convincing. Hence the standard of proof is not actually well-recorded data on any medium, be it parchment or encoded single-use hard drives - it's independent repetition: Can I go out into the real world with my own equipment and see what you claim to have seen? And how tolerant of variations - e.g. moving away from a particular location, a particular observing position - is the model of things that you're proposing (and hence how useful is it)?

This is where a lot of conspiracy theories fall down anyway: they rely, for their 'proof', on things you don't actually see when you replicate their claims, or on people blindly testing in a very specific way that conceals a flawed technique. For example, flat-earthers claim that things don't disappear behind the horizon but merely shrink with distance, and show videos of 'vanished' objects being recovered to view by optical magnification - when in fact the objects they show haven't crossed the horizon at all, they've just got smaller than the video's un-magnified resolution can resolve, and when the test is repeated with objects far enough away to have actually crossed the horizon, they cannot be recovered to view. Hence conspiracy theories mainly rely on people not bothering to check things themselves, or not properly grasping the nature of the tests. A lack of 'doing the maths' and of properly understanding what the various scenarios actually predict also comes up a lot.
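For anyone who wants to actually do the maths on that horizon example, it's only a few lines. Here's a rough Python sketch - assuming a smooth sphere of radius 6371 km and ignoring atmospheric refraction, which typically lets you see a little further, so the real hidden amounts are slightly smaller:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; refraction is ignored here

def horizon_distance_m(eye_height_m: float) -> float:
    """Distance to the geometric horizon for an observer at this eye height."""
    return math.sqrt(2 * EARTH_RADIUS_M * eye_height_m)

def hidden_height_m(eye_height_m: float, target_distance_m: float) -> float:
    """How much of a distant object is hidden below the horizon (0 if it hasn't crossed it yet)."""
    beyond = target_distance_m - horizon_distance_m(eye_height_m)
    return 0.0 if beyond <= 0 else beyond ** 2 / (2 * EARTH_RADIUS_M)

# A standing observer (eye height ~2 m) watching a boat 20 km out:
print(f"horizon at {horizon_distance_m(2) / 1000:.1f} km")   # ~5.0 km
print(f"hidden: {hidden_height_m(2, 20_000):.1f} m")         # ~17.5 m
```

So at 20 km the bottom 17 metres or so of the boat are genuinely below the horizon, and no amount of zooming brings them back - which is exactly the repeat-the-test-properly check those videos avoid.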

So one response to the risks of AI and deepfakes is to push for a culture change: rely less on media and the reports of others, and emphasise the need for and benefits of actually repeating the proofs offered and examining the models and techniques - in other words, doing the leg-work yourself wherever possible. Did Trump really do lines of coke with Biden, Nixon and Obama, off the Resolute desk in the Oval Office, like the YouTube video shows? Well, can we actually see him doing coke and hanging with those guys at other places? Who saw them later that day, and were they jumping? What other witnesses, besides the video, are there? What results are there from a narcotics swab of the desk? Etc.
 
Just tackling the first one: To some extent the original scientific method already contains an answer to that ..... What results are there from a narcotics swab of the desk? Etc.
Alas, I fear most people will not bother to do the proper research. Just as tweets get replicated over and over again, fake news spreads like fire in a dry savanna.
 
Alas, I fear most people will not bother to do the proper research. Just as tweets get replicated over and over again, fake news spreads like fire in a dry savanna.

Yes, but don't we already have that problem even without AI content? I'm finding it hard to believe the generic popular content I see. When YouTube pushes a video that shows some guys rescuing a dog from a river, I instinctively wonder if they threw it in just so they could make the video. When I see a video of a "Karen" acting up in a shop, I always wonder if s/he is in on the whole thing (viral videos make money!). That shark approaching the diver; is it even real or a decent video edit? Even when I don't click on this garbage, YouTube and Facebook keep pushing it at me. If half of it is fake and you can't tell which half, then all of it is pointless. Just this morning on Facebook there is a supposed battle in Ukraine - basically a meaningless mash-up of library footage and battle footage (who knows from where and when) with some kind of narrative bolted on. I understand that - in terms of content - Google and Facebook are just the conduits by which ANYONE can supply ANYTHING to the world. But have they no pride? No shame?

I think it's time for the platforms to be held accountable for content they 'push'. I mean, if I want to see the latest video from Fred Bloggs I can search for it and find it. Fair enough. But if they push something into my feed and it is false/malicious/libelous, then how can they not be responsible? Food for thought.
 
Yes, but don't we already have that problem even without AI content? ..... Food for thought.
Oh yes, absolutely - but that is why I say it's a culture change that's needed: a change emphasising and encouraging doing the research yourself, and a deeper change to enable more people to do so, more often.
 
Yes, but don't we already have that problem even without AI content? I'm finding it hard to believe the generic popular content .....

Ah... but there is always room for improvement... and the amount will grow by orders of magnitude, because what required thought and editing will now be available with a few keystrokes (or words whispered into a microphone).
 
Yes, but don't we already have that problem even without AI content? ..... But have they no pride? No shame?
How could you doubt it?
 
I think it's definitely getting harder to discern fact from fiction, and with the advancement of technology it is only likely to get trickier. We tend to choose our 'voices of truth' and believe them; otherwise, what do we do? I pretty much rely on Wikipedia as the only reliable source of information, but on some of the more obscure pages where I have good knowledge (such as 'retro' games) I have noticed incorrect/incomplete information, so it isn't infallible.

Still, it's better than the past when all we had to rely on were newspapers, our memories and 'the man in the pub' for most of what we knew.

Of course with AI the internet is 'their patch', so we as humans are at a major disadvantage. We rely so much on the internet and computers that they have become integral to how we survive from day to day. Imagine if all the computers suddenly stopped working or the internet shut down. Simply 'pulling the plug' when/if it gets out of control will be impossible.
 
Ah... but there is always room for improvement... and the amount will grow by orders of magnitude, because what required thought and editing will now be available with a few keystrokes (or words whispered into a microphone).
My gut feeling is that we need to give people the means (and motivation) to test some things themselves, without referring to the online world at all, from a young age. That at least gives us all a baseline shared reality to work from. The amount of effort it takes an individual to establish that baseline is a constant - it doesn't get harder as online fakery gets easier - and while it's not utterly immune to fakery it's a lot more resistant than a screen, especially over many repetitions by many individuals.

The amount of potentially misleading data online will certainly grow exponentially, but some of it at least can be, in whole or part, verified against the baseline if enough people take the trouble to establish it. Some stuff will need a judgement call - is the source credible, can we find a witness, etc. And some we will have to take on faith - if we wish to keep using the online world we are going to have to accept having some doubt in what we get from it, although we can do what we can to minimise it.

What I worry about is people who flatly refuse to look at the real world and at least find an agreement regarding how it acts most of the time. I once had a conversation with a person who was adamant the stars do not rotate around the north and south celestial poles over 24 hours. "The stars over MY house are stationary, I'm sure." I pointed out they could just sit outside for a few hours and check for themselves. They responded that they wouldn't give anyone the chance to feed them illusions that way, and they were sticking to what XXXX YouTuber had told them! It was... psychotic, is the word, I think: a true break from reality had been made.

That scared me - being unable to check reality is one thing, but refusing to is quite another. I'll be blunt: it bothers me that this person is probably allowed to drive, vote, and (being in the US) own a gun just 'for self protection'. Truly. What will their reaction be on the day reality arrives in some way they can't refuse to look at? That kind of thinking is where a lot of tragedies come from.
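The maddening thing is how cheap the check they refused actually is - you can even pre-compute what you should see. A quick sketch, assuming a sidereal day of about 23.93 hours (so the sky turns roughly 15 degrees per hour around the celestial pole):

```python
# Expected apparent rotation of the stars around the celestial pole.
SIDEREAL_DAY_HOURS = 23.9345  # Earth's rotation period relative to the stars

def rotation_degrees(hours_watched: float) -> float:
    """Angle a star should sweep around the pole while you watch."""
    return 360.0 * hours_watched / SIDEREAL_DAY_HOURS

for hours in (1, 3, 6):
    print(f"after {hours} h: {rotation_degrees(hours):.1f} degrees")
# after 1 h: 15.0 degrees - obvious against any fixed foreground, like a rooftop
```

Sit outside for even one hour with a rooftop or pole as a reference and that 15-degree shift is unmissable. That's the kind of baseline reality no YouTuber can argue away.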
 
School exposes people to several things besides a set of approved facts. One of them is how to research a subject, which in reality is teaching people how to think. You can get by blindly memorizing everything, because you do get judged quite a bit on what facts you know. But for a number of people, doing research is a bother: why should a person ever have to look at multiple sources? Knowing the answer (by any means possible) is taken as a sign of adulthood; so is never having to research a subject, and forgetting about research altogether is a goal for some once they graduate from school. It's even possible that being able to pay people to answer questions about things you don't know much about is taken as yet another sign of adulthood.
 
I actually asked ChatGPT what people could do to curb the spread of misinformation:


"The emergence of deep learning technology has made it possible for AI models to mimic human speech with a high level of accuracy. While this technology has many beneficial applications, it can also be misused in harmful ways, such as impersonating someone for fraudulent or malicious purposes. Here are some ways we can prevent bad things from happening as a result of AI-generated voices:

Legal regulations: Governments can enforce laws that prohibit the use of AI-generated voices for fraudulent purposes. For instance, in the US, California has passed a law that makes it illegal to use deepfakes or AI-generated videos or audios for political purposes without disclosing that they are fake.

Education: Raising awareness among the general public about the existence and potential impact of AI-generated voices can help people recognize the signs of a fake voice. Education campaigns can also provide information on how to report suspicious activity.

Authentication techniques: Voice biometrics and other forms of voice authentication can be used to verify the identity of the person speaking, making it harder for bad actors to use AI-generated voices to impersonate someone else.

Development of countermeasures: Researchers are working on developing algorithms that can detect and identify AI-generated voices. These countermeasures can help to identify and flag any content that is not authentic, reducing the impact of fake voices on society.

Ethical guidelines: Technology companies can adopt ethical guidelines that promote the responsible use of AI-generated voices. Such guidelines can ensure that these technologies are used for legitimate purposes and not for malicious or fraudulent activities.

In conclusion, preventing the misuse of AI-generated voices requires a multi-faceted approach that involves legal regulations, education, authentication techniques, the development of countermeasures, and ethical guidelines. These efforts can help to mitigate the potential risks associated with this technology and promote its responsible use."

Nice try, ChatGPT, but you don't fool me!
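To be fair to it, the 'development of countermeasures' item is the one you can actually tinker with yourself. Below is a heavily simplified Python sketch of the classifier approach, using numpy and scikit-learn. The audio is synthetic stand-in data invented purely for illustration (real detectors train on labelled real/generated clips with far richer features), and the 'fake' artefact here - weak high-frequency energy - is just one simplified example of the kind of cue early vocoders left behind:

```python
# Toy sketch of classifier-based fake-voice detection on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
SR, N = 16_000, 16_000  # 1-second "clips" at 16 kHz

def synth_clip(fake: bool) -> np.ndarray:
    """Toy 'voice': a few harmonics plus noise. 'Fake' clips get their
    high band attenuated - a stand-in for a generator artefact."""
    t = np.arange(N) / SR
    f0 = rng.uniform(100, 220)
    clip = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 8))
    clip += 0.3 * rng.standard_normal(N)
    if fake:
        spec = np.fft.rfft(clip)
        spec[len(spec) // 2:] *= 0.1  # crude attenuation above ~4 kHz
        clip = np.fft.irfft(spec, n=N)
    return clip

def features(clip: np.ndarray) -> np.ndarray:
    """Single feature: fraction of spectral energy above 4 kHz."""
    power = np.abs(np.fft.rfft(clip)) ** 2
    cut = int(4000 / (SR / 2) * len(power))
    return np.array([power[cut:].sum() / power.sum()])

X = np.array([features(synth_clip(fake)) for fake in [False] * 200 + [True] * 200])
y = np.array([0] * 200 + [1] * 200)  # 0 = 'real', 1 = 'fake'
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")  # ~1.00 on this contrived data
```

Of course, real generators don't leave artefacts this obvious, which is exactly why detection is an arms race - but the pipeline shape (extract features, train a classifier on labelled examples) is the same.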
 
I understand what you are saying CC but exactly the same was said when the 'computer age' started in the '50s and '60s ..... now it's commonplace.

There's a great book that came out several years ago that addressed this a little - I think it was about the Luddite fallacy or something. Anyway, it said that the Luddite fallacy remained a fallacy only so long as computers were not able to outperform human labour in every way. The difference is that in a short while they may well be able to - in every capacity: intellectually, physically, even sexually.

Even in the area we think is most human - emotional care - they may well be able to do it better than real humans, with humans bonding with machines, as in the film Her. There have already been cases of people claiming to have fallen in love with an AI.

I hope not, and that there's still some way of ethically implementing AI that maximizes the benefits and minimizes the risks.
 
School exposes people to several things besides a set of approved facts ..... taken as yet another sign of adulthood.
Yes, though I have to hope we can make some improvement in that regard.
 
I am tempted to add: The attitude that adulthood means not having to research your facts is what you get when so many treat admitting 'I don't know' as a sign of weakness or failure, and value appearing confident and decisive here-and-now over waiting and checking properly. And these people are called 'upper management' - and they also build a world at odds with reality, for themselves, where those comfortable-for-them traits are encouraged. But I may just be a bitter old ex-engineer :D
 
