I'm afraid that will never be the case. If something can be done and there's gain to be made from it, it will be done. And there is little point trying to proscribe such development, as it will only move to a part of the world that has not proscribed it, and there will always be such places. Cloning humans, for example: as soon as it can be done it will be done, no matter how hard some will try to stop it. It's not a matter of moral approval (I don't like the idea), but there will always be people who figure morality doesn't apply to them.

The greatest failure of humanity relating to technological innovation is that we never (as a collective) ask if something should be developed; we just invent and develop new things ‘because we can’, without any question or forethought. AI, it seems to me, sits firmly in the ‘could do it, but shouldn’t’ camp. I often feel the world is full of very clever stupid people.
One thing the AI weather forecaster will use is the activity of cars' windshield wipers, to tell within a foot or so where it is actually raining. I suppose they could also surmise the intensity of the rain by how fast the wipers are going.
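That crowdsourcing idea is concrete enough to sketch. Below is a minimal Python toy of the aggregation step, assuming a hypothetical feed of per-car wiper telemetry; the WiperReport fields and the rain_map function are invented placeholders for illustration, not any real fleet or manufacturer API. It bins reports into roughly 1 km grid cells and takes the median wiper speed per cell as a crude intensity proxy (foot-level precision would need far denser data than this implies).

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical telemetry record -- these field names are placeholders
# for the idea in the post, not a real automotive data feed.
@dataclass
class WiperReport:
    lat: float
    lon: float
    wiper_speed: int  # 0 = off, 1 = intermittent, 2 = low, 3 = high

def rain_map(reports, cell_deg=0.01):
    """Bucket wiper reports into a coarse lat/lon grid (~1 km cells)
    and take the (upper) median wiper speed per cell as a crude
    rain-intensity proxy."""
    cells = defaultdict(list)
    for r in reports:
        key = (round(r.lat / cell_deg), round(r.lon / cell_deg))
        cells[key].append(r.wiper_speed)
    return {key: sorted(speeds)[len(speeds) // 2]
            for key, speeds in cells.items()}

reports = [WiperReport(51.5072, -0.1276, 3),
           WiperReport(51.5075, -0.1280, 2),
           WiperReport(51.6000, -0.2000, 0)]
print(rain_map(reports))  # two cells: one raining hard, one dry
```

Using the median rather than the mean keeps a single stuck or over-enthusiastic wiper from skewing a whole cell.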
If you’re not familiar with the term ‘AI cannibalism’, let me break it down in brief: large language models (LLMs) like ChatGPT and Google Bard scrape the public internet for data to be used when generating responses. In recent months, a veritable boom in AI-generated content online - including an unwanted torrent of AI-authored novels on Kindle Unlimited - means that LLMs are increasingly likely to scoop up materials that were already produced by an AI when hunting through the web for information.
This runs the risk of creating a feedback loop, where AI models ‘learn’ from content that was itself AI-generated, resulting in a gradual decline in output coherence and quality. With numerous LLMs now available to both professionals and the wider public, the risk of AI cannibalism is growing.
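The feedback loop is easy to demonstrate with a toy model. The Python sketch below stands in a fitted Gaussian distribution for an LLM: each ‘generation’ is trained on the previous generation's output and then produces the next corpus. This is only an illustration of the degradation mechanism under that stand-in assumption, not how any real LLM training pipeline works; all names here are invented for the example.

```python
import random
import statistics

def train_and_generate(corpus, n):
    """Stand-in for 'train a model on the corpus, then generate n new
    items': fit a Gaussian (mean, stdev) to the data, then sample n
    fresh values from the fitted model."""
    mu = statistics.fmean(corpus)
    sigma = statistics.stdev(corpus)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
corpus = [random.gauss(0.0, 1.0) for _ in range(100)]  # 'human-written' originals
for gen in range(1, 21):
    # Each generation trains only on the previous generation's output.
    corpus = train_and_generate(corpus, 100)
    print(f"gen {gen:2d}: stdev={statistics.stdev(corpus):.3f}, "
          f"range=({min(corpus):+.2f}, {max(corpus):+.2f})")
```

Run it and the spread tends to wander away from the original 1.0, with rare, extreme values disappearing first: a small-scale analogue of the coherence and quality loss the article describes.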
That’s interesting, and seems quite a positive development.

Interesting to see the issue of "AI cannibalism" coming up as a threat to AI:
ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future
AI for the smart guy?
www.techradar.com
Can they replace actors yet? No. But they will at some point in the future.
Yes, I'd agree. My slightly glib 'no' was really meaning all actors, including the leads. Interesting point about the stunt and body doubles, as they are always careful not to show them too clearly anyway.

Technology, whether using AI or not, will continue to eat away at more rote and repetitive tasks. It isn't going to make huge leaps to creating things like full-length novels. Current computer technology is maxed out in developing uses like ChatGPT, and incremental improvements in computer technology and costs are not going to provide a breakthrough.
This has already happened. Movie extras in crowd scenes have already been replaced by CGI models. Probably the next at risk are voice actors, both in video games and animation. Computer generated voices will displace human actors.
Primary actors will likely become CGI- and AI-augmented. Things like aging and de-aging will become more commonplace. Makeup artists will be at risk when prosthetics for aliens, etc., become post-production additions. Stunt doubles and body doubles may be replaced by CGI models and deepfake images. I feel this technology is at hand and will allow for reduced production costs, and thus be adopted.
This is stunningly naive of them. Do the signatories really believe that Russia and China would abide by that, rather than gain a six-month lead in AI development?

A similar open letter published last March urged a six-month pause on the training of powerful AI systems, saying they “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Signed by more than 1,000 experts and executives – including Elon Musk and Apple co-founder Steve Wozniak – the statement warned that AI could pose “profound risks to society and humanity.”