I think that it's in all governments' interests to keep people occupied with work for lots of reasons, not the least of which is the generation of taxes.
This level of automation is actually good news for countries with declining populations (China, Germany and Japan); elsewhere, it is not such great news. On a global level, well-paid jobs are actually scarce.
This is a list of the risks I perceive we currently have, with the current level of artificial intelligence. Feel free to add more risks and discuss them.
Current risks:
Information - the proliferation of fake news backed with images and voice.
Just tackling the first one: to some extent the original scientific method already contains an answer to that, because it is always possible for a good fraudster to create something that looks convincing. Hence the standard of proof is not actually well-recorded data on any medium, be it parchment or encoded single-use hard drives; it's independent repetition: can I go out into the real world with my own equipment and see what you claim to have seen? And how tolerant of variations - e.g. moving away from a particular location or a particular observing position - is the model of things that you're proposing (and hence how useful is it)?
This is where a lot of conspiracy theories fall down anyway: they rely, for their 'proof', on things that you don't actually see when you replicate their claims, or on people blindly testing in a very specific way that conceals a flawed technique. Flat earthers, for example, claim that things don't disappear behind the horizon but merely shrink with distance, and show videos of 'vanished' objects being recovered to view by optical magnification - when in fact the objects they showed hadn't crossed the horizon at all, they'd just got smaller than the video's un-magnified resolution could show. When the test is repeated with objects far enough away to have actually crossed the horizon, they cannot be recovered to view. Hence such theories mainly rely on people not actually bothering to check things themselves, or not properly grasping the nature of the tests. A lack of 'doing the maths' and properly understanding what the various scenarios actually predict also comes up a lot.
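The 'doing the maths' point is easy to make concrete for the horizon example. A back-of-envelope sketch using simple circle geometry (ignoring atmospheric refraction, which in reality lets you see slightly further; the figures in the comments are approximate):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def horizon_distance_m(observer_height_m: float) -> float:
    """Distance to the geometric horizon for an observer at the given eye height."""
    return math.sqrt(2 * EARTH_RADIUS_M * observer_height_m)

def hidden_height_m(observer_height_m: float, target_distance_m: float) -> float:
    """How much of a distant object's base is hidden below the horizon."""
    d = horizon_distance_m(observer_height_m)
    if target_distance_m <= d:
        return 0.0  # target hasn't crossed the horizon; zoom can recover it
    return (target_distance_m - d) ** 2 / (2 * EARTH_RADIUS_M)

# For an observer with eye height 2 m, the horizon is roughly 5 km away,
# so a boat 20 km out has roughly its bottom 17-18 m hidden - and no
# amount of optical zoom can bring that hidden part back into view.
print(horizon_distance_m(2))        # ~5,048 m
print(hidden_height_m(2, 20_000))   # ~17.5 m
```

So a small boat 4 km away merely shrinks (zoom recovers it), while the same boat at 20 km is genuinely cut off at the waterline - exactly the distinction the flawed videos blur.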
So one response to the risks of AI and deepfakes is to push for a culture change: rely less on media and the reports of others, and emphasise the need for, and benefits of, actually repeating the proofs offered and examining the models and techniques - in other words, doing the leg-work yourself wherever possible. Did Trump really do lines of coke with Biden, Nixon and Obama off the Resolute desk in the Oval Office, like the YouTube video shows? Well, can we actually see him doing coke and hanging with those guys at other places? Who saw them later that day, and were they jumpy? What witnesses are there besides the video? What results are there from a narcotics swab of the desk? Etc.
Alas, I fear most people will not bother to do the proper research. Just as tweets get replicated over and over again, fake news spreads like fire in a dry savanna.
Oh yes, absolutely - but that is why I say it's a culture change that's needed: a change emphasising and encouraging doing the research yourself, and a deeper change enabling more people to do so, more often.
Yes, but don't we already have that problem even without AI content? I'm finding it hard to believe the generic popular content I see. When Youtube pushes a video that shows some guys rescuing a dog from a river, I instinctively wonder if they threw it in just so they could make the video. When I see a video of a "karen" acting up in a shop, I always wonder if s/he is in on the whole thing (viral videos make money!). That shark approaching the diver: is it even real, or a decent video edit? Even when I don't click on this garbage, Youtube and Facebook keep pushing it at me. If half of it is fake and you can't tell which half, then all of it is pointless. Just this morning on Facebook there was a supposed battle in Ukraine - basically a meaningless mash-up of library footage and battle footage (who knows from where and when) with some kind of narrative bolted on. I understand that - in terms of content - Google and Facebook are just the conduits by which ANYONE can supply ANYTHING to the world. But have they no pride? No shame?
I think it's time for the platforms to be held accountable for content they 'push'. I mean, if I want to see the latest video from Fred Bloggs, I can search for it and find it. Fair enough. But if they push something into my feed and it is false/malicious/libelous, then how can they not be responsible? Food for thought.
How could you doubt it?

Yes, well, obviously not everything on the Internet is fake.
Gee, I remember watching Top Cat as a kid.
My gut feeling is that we need to give people the means (and motivation) to test some things themselves, without referring to the online world at all, from a young age. That at least gives us all a baseline shared reality to work from. The amount of effort it takes an individual to establish that baseline is a constant - it doesn't get harder as online fakery gets easier - and while it's not utterly immune to fakery, it's a lot more resistant than a screen, especially over many repetitions by many individuals.

Ah... but there is always room for improvement... and the amount will grow by orders of magnitude, because what once required thought and editing will now be available with a few keystrokes (or words whispered into a microphone).
I actually asked ChatGPT what people could do to curb the spread of misinformation:
"The emergence of deep learning technology has made it possible for AI models to mimic human speech with a high level of accuracy. While this technology has many beneficial applications, it can also be misused in harmful ways, such as impersonating someone for fraudulent or malicious purposes. Here are some ways we can prevent bad things from happening as a result of AI-generated voices:
Legal regulations: Governments can enforce laws that prohibit the use of AI-generated voices for fraudulent purposes. For instance, in the US, California has passed a law that makes it illegal to use deepfakes or AI-generated videos or audios for political purposes without disclosing that they are fake.
Education: Raising awareness among the general public about the existence and potential impact of AI-generated voices can help people recognize the signs of a fake voice. Education campaigns can also provide information on how to report suspicious activity.
Authentication techniques: Voice biometrics and other forms of voice authentication can be used to verify the identity of the person speaking, making it harder for bad actors to use AI-generated voices to impersonate someone else.
Development of countermeasures: Researchers are working on developing algorithms that can detect and identify AI-generated voices. These countermeasures can help to identify and flag any content that is not authentic, reducing the impact of fake voices on society.
Ethical guidelines: Technology companies can adopt ethical guidelines that promote the responsible use of AI-generated voices. Such guidelines can ensure that these technologies are used for legitimate purposes and not for malicious or fraudulent activities.
In conclusion, preventing the misuse of AI-generated voices requires a multi-faceted approach that involves legal regulations, education, authentication techniques, the development of countermeasures, and ethical guidelines. These efforts can help to mitigate the potential risks associated with this technology and promote its responsible use."
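Of the measures ChatGPT lists, the 'authentication techniques' point is the easiest to make concrete. As an illustrative sketch only - a generic keyed-hash provenance check, not any specific biometric or commercial system, and with the key and function names invented for the example - content signing can work like this:

```python
import hashlib
import hmac

# Hypothetical shared key between a publisher and a verifier.
SECRET_KEY = b"publisher-signing-key"

def sign_media(data: bytes) -> str:
    """Attach a provenance tag: an HMAC-SHA256 over the media bytes."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check that the media still matches the publisher's tag."""
    expected = sign_media(data)
    # compare_digest avoids timing side channels when comparing tags
    return hmac.compare_digest(expected, tag)

clip = b"\x00\x01audio-bytes-from-the-publisher"
tag = sign_media(clip)
assert verify_media(clip, tag)             # untampered clip verifies
assert not verify_media(clip + b"x", tag)  # any edit breaks the tag
```

The point of the sketch is the asymmetry: a verifier can cheaply confirm that a clip is exactly what a known publisher released, while a faker cannot produce a valid tag without the key. Real-world schemes use public-key signatures rather than a shared secret so that anyone can verify without being able to sign.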
I understand what you are saying CC but exactly the same was said when the 'computer age' started in the '50s and '60s. What actually happened was that new industries started up. As a minor example, some of the wealth generated allowed the leisure / foreign holiday industry to blossom. My parents had never been abroad (apart from my dad's brief visit to Dunkirk!) but now it's commonplace.
Edit: Looks like I've repeated what Christine was saying more eloquently than I have.
Yes, though I have to hope we can make some improvement in that regard.

School exposes people to several things besides a set of approved facts. One of those things is how to research a subject - which, in reality, is teaching people how to think. You can get by blindly memorizing everything, because you are judged quite a bit on what facts you know. Doing research is a bother for a number of people: why should a person ever have to look at multiple sources? For many, knowing the answer (by any means possible) is a sign of adulthood, and forgetting about researching subjects is a goal once they graduate from school. It's even possible that being able to pay people to answer questions about things you don't know much about is taken as another sign of adulthood.