A.I. (general thread for any AI-related topics)

 
Apparently about 10 percent of ChatGPT's traffic comes from students, judging by usage reports showing a 10 percent drop from May to June. Schools block access to ChatGPT by blocking it on school-owned devices and networks. Students using other devices and networks go right around the block, which leaves only the students limited to school-owned devices and networks. Not off to a good start.

Rather than overtly taking on humans, maybe AI will just slowly replace human decision-making, like in the Time Traveler story. Airlines in the US are having a lot of trouble scheduling flights because of weather interruptions. Besides disrupting passengers' plans, the interrupted flights also disrupt the airlines' personnel schedules, which makes it difficult for the airlines to keep track of everything. To get the scheduling and flight disruptions under control, one airline is going to use AI to read all the weather reports, including real-time reports, and schedule flight routes, arrival times, and departure times down to the second, so as to avoid unexpected schedule changes forced by the weather.

One thing the AI weather forecaster will use is the activity of cars' windshield wipers, to tell within a foot where it is actually raining. I suppose it could also surmise the intensity of the rain from how fast the wipers are going. So AI is watching what goes on inside of cars. It probably knows the radio stations, playlists, who you contact while driving, etc., etc., etc.

All the little decisions that require a lot of serious thinking are pushed (pushing, pushing, pushing) towards AI, leaving people thinking grander thoughts, which are perhaps only illusions of how much they have accomplished with the touch of a button. Babies are born now knowing how to handle virtual buttons that exist only on flat, dimensionless screens.

Some older people who lived before phones kept everyone in constant contact believe that 1984 has already happened, and that the losses caused by technology are not worth the mythical gains. Going to the Moon or Mars is just rich folk running away from the reality of their lives.

Back to school. I read the thread about putting a P. K. Dick short story into electronic analysis. Most of the negative comments are why I read his stories. The machine did say that the writing showed great promise. I put some of my own work in for analysis and got similar comments, but no mention of my work showing any kind of promise. I was impressed by the way the material was dissected, with all the shortcomings and missing details pointed out. I can see why students would appreciate using something like that to help them write up their homework. Being able to tell a student they are missing details is one way of grading work. That's a definite grade booster.

There are three ways AI/student interaction can go, setting aside the socioeconomic divide. You can learn from your writing's shortcomings and write better responses without being prompted by a machine. Or you can keep letting the machine point out the mistakes and just fix them. Or you learn a little, perhaps as little as just how to operate the program, but not everything, as in: who cares. The same goes for GPS, writing computer programs, anything requiring computerized interaction.

There can't be AI without computers, and computers have been a part of life for over 50 years now. Maybe it's just computer-assisted living, like the old folks have. The computers were always supplying the answers; it just took a long time to implement them because there was no physical connection. Now the decisions can be implemented immediately, and it's only the instantaneous response to a large number of inputs that makes the machine look smart. Humans are easy.
 
The greatest failure of humanity relating to technological innovation is that we never (as a collective) ask if something should be developed, we just invent and develop new things ‘because we can’, without any question or forethought. AI, it seems to me, sits firmly in the ‘could do it, but shouldn’t’ camp. I often feel the world is full of very clever stupid people.
 
The greatest failure of humanity relating to technological innovation is that we never (as a collective) ask if something should be developed, we just invent and develop new things ‘because we can’, without any question or forethought. AI, it seems to me, sits firmly in the ‘could do it, but shouldn’t’ camp. I often feel the world is full of very clever stupid people.
I'm afraid that will always be the case. If something can be done and there's gain to be made from it, it will be done. And there is little point trying to proscribe such development, as it will only move to a part of the world that has not proscribed it, and there will always be such places. Cloning humans, for example: as soon as it can be done it will be done, no matter how hard some will try to stop it. It's not a matter of moral approval; I don't like the idea, but there will always be people who figure morality doesn't apply to them.
 
One thing the AI weather forecaster will use is the activity of cars' windshield wipers, to tell within a foot where it is actually raining. I suppose it could also surmise the intensity of the rain from how fast the wipers are going.

That's why I like to spray my windshield and run my wipers at random times -- to mess with the machine.
 
Amazon has a big problem as AI-generated books flood Kindle Unlimited
"Recently, an indie author, Caitlyn Lynch, tweeted about noticing that only 19 of the best sellers in the Teen & Young Adult Contemporary Romance eBooks top 100 chart on Amazon were real, legit books."

The AI books were cleared out, but apparently they will have to be cleared out on a daily basis. Amazon does not require authors to state whether a book is AI-generated or AI-assisted. It's not known whether Amazon is up to the task, commitment-wise. And this was only one category. While an AI book can't be copyrighted, Amazon has its own copyright system, which could be used by the HIs, the human imitators.

Did Amazon check every best-seller list they offer? Are they still checking them? Or do they simply attend to the ones people take the time to protest?

AI cannibalism: if a book is known to be AI-generated, it can be plagiarized by humans or by other AIs with no penalties. Possibly AI search results are already being dumbed down because AI-generated content is being folded into the AI-collected search results. There is also the theory that AI search results are being dumbed down for monetary reasons: either it is not desirable to provide free money-making services to the general public, or a subscription service is what comes next.

I have noticed that the quality of Bing's results is not as good as it used to be, as the results become more narrow-minded. Bing is also much quicker to tell you to ask about a different topic than it used to be. That could be a simple way of limiting a service that is unable to meet all its requests in a timely manner. It is annoying when it gives out wrong answers, or just tells you to move on instead.

Charging for intelligent search results might get the attention of the FTC, which is already investigating ChatGPT, because it would mean using other people's work without any kind of compensation. When AI crawls the web for training purposes, the final result is not plainly visible. But when a pay-to-use search engine displays unadulterated search results, it can be clearly seen where the material is coming from.

In an effort to make an end run around using other people's work, a few new search engines are using the offer of ad-free results as an excuse for charging for search. One of these sites says it is paying news subscription sites their fees for using their feeds. These sites are able to exist because they are using Google and Bing results for free. Stealing from those who steal from everyone in order to make a living. Must be nice.

The war between Apple and Google has drawn battle lines in electric vehicles. In new General Motors EVs you get Google, with a collection of pay-to-use services providing what Apple CarPlay provided for free. This opens up a job market for third-party applications. The harder the internet gets restrained, the more opportunities arise to get around the bottleneck. Kind of like Jurassic Park.
 
Interesting to see the issue of "AI cannibalism" coming up as a threat to AI:

If you’re not familiar with the term ‘AI cannibalism’, let me break it down in brief: large language models (LLMs) like ChatGPT and Google Bard scrape the public internet for data to be used when generating responses. In recent months, a veritable boom in AI-generated content online - including an unwanted torrent of AI-authored novels on Kindle Unlimited - means that LLMs are increasingly likely to scoop up materials that were already produced by an AI when hunting through the web for information.

This runs the risk of creating a feedback loop, where AI models ‘learn’ from content that was itself AI-generated, resulting in a gradual decline in output coherence and quality. With numerous LLMs now available both to professionals and the wider public, the risk of AI cannibalism is becoming increasingly prevalent.
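A crude numerical sketch of why such a feedback loop tends to degrade output: in this toy model (made-up numbers, a single Gaussian standing in for a real LLM's output distribution), each "generation" is fitted only to a finite sample drawn from the previous generation's output, so estimation error compounds and diversity is steadily lost.

```python
import random
import statistics

# Toy sketch of the "AI cannibalism" feedback loop (hypothetical numbers):
# each model generation is fitted only to a finite sample drawn from the
# previous generation's output, so estimation error compounds over time.
random.seed(0)

mean, stdev = 0.0, 1.0          # generation 0: fitted to "human" data
initial_stdev = stdev
for generation in range(500):
    # draw a finite "training set" from the current model's own output
    samples = [random.gauss(mean, stdev) for _ in range(20)]
    # refit the next generation on that purely synthetic data
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)

print(f"spread after 500 generations: {stdev:.6f} (started at {initial_stdev})")
```

In this toy run the estimated spread shrinks toward zero over the generations: whatever variety a finite synthetic sample fails to capture is gone for good, which is the statistical intuition behind the "gradual decline in output coherence and quality".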
 
The current AI cannot write a novel, and in its current form never will. At least not one that anyone would want to read. That's not to say it's not useful for helping come up with ideas. Of course, many will say, “but there are already novels that have been written by AI.” In my opinion this is marketing hype. These novels have, I bet, either been so heavily edited that it would have been just as easy not to employ AI in the creation of any prose, or are so bad as to be unreadable. AI can write letters and short bios and such, and possibly even non-fiction books (but still with lots of input, editing, and arranging by a person). Over the last month I've used paid subscriptions to dedicated AI writing platforms to try to wrangle even relatively short passages of what I think is quality prose. I have realised the very big limitations of AI in its current form.
 
I think there are some pretty unrealistic expectations of current AI. When cars first started appearing in the late C19 they barely performed as effectively as a well-handled carriage and horses, yet look at how far they've come today. The same goes for computers. Couple that with our typical inability to spot the real game changers in any new tech: in Blish's Cities in Flight the computers could talk to you, but they still navigated with slide rules; Asimov had robots with positronic brains that could equal or better human ones, but he still had chemical photography. AI is still in the equivalent of late-C19 car evolution. So of course it can't yet do all those things SF has loved to attribute to it. But it is evolving fast, and I'd hazard that we are probably still unaware of what its biggest impacts might be, and they are quite likely not to be whatever we expect.

So here's my two penny worth:
Can they write novels and screenplays yet? No. But they will at some point in the future.
Can they replace actors yet? No. But they will at some point in the future.
Can they govern human populations yet? No. But they will at some point in the future.

I could go on ad infinitum but, hopefully, you get my point. I'm pretty confident of those predictions, just not what the timescales will be. I'm also pretty confident that many of the biggest or most significant impacts have yet to be determined and are maybe unknowable at this point in time. Like, maybe, as some SF authors have speculated, some sort of FTL travel technology in the future may only be possible with AIs to control it.

I would also add that, for the specific case of novel writing, we Chronners are probably, rather counter-intuitively, not the best judges. Hundreds of thousands of people loved 50 Shades of Grey, but was it well written? A vast number of book readers, and I'd hazard probably the majority of them, are not as discerning as the average Chronner, and seem to lap up the type of formulaic writing that AIs are likely to be good at comparatively soon. No disrespect intended to anyone on these pages, but just take a look at the plots and characters dominant in books like Mills and Boon romances.
 
There seems to be a fundamental misunderstanding about what the "AI" that we have today IS, and what it IS NOT.

ChatGPT is a chatbot (a program designed to mimic human conversation) that uses a large language model (a giant model of the probabilities of which words will appear, and in what order).

It is crucial to note, however, what the data is that is being collected and refined in the training system here: it is purely information about how words appear in relation to each other. That is, how often words occur together, how closely, in what relative positions and so on. It is not, as we do, storing definitions or associations between those words and their real world referents, nor is it storing a perfect copy of the training material for future reference. ChatGPT does not sit atop a great library it can peer through at will; it has read every book in the library once and distilled the statistical relationships between the words in that library and then burned the library.

ChatGPT does not understand the logical correlations of these words or the actual things that the words (as symbols) signify (their ‘referents’). It does not know that water makes you wet, only that ‘water’ and ‘wet’ tend to appear together and humans sometimes say ‘water makes you wet’ (in that order) for reasons it does not and cannot understand.

In that sense, ChatGPT’s greatest limitation is that it doesn’t know anything about anything; it isn’t storing definitions of words or a sense of their meanings or connections to real world objects or facts to reference about them. ChatGPT is, in fact, incapable of knowing anything at all. The assumption so many people make is that when they ask ChatGPT a question, it ‘researches’ the answer the way we would, perhaps by checking Wikipedia for the relevant information. But ChatGPT doesn’t have ‘information’ in this sense; it has no discrete facts. To put it one way, ChatGPT does not and cannot know that “World War I started in 1914.” What it does know is that “World War I” “1914” and “start” (and its synonyms) tend to appear together in its training material, so when you ask, “when did WWI start?” it can give that answer. But it can also give absolutely nonsensical or blatantly wrong answers with exactly the same kind of confidence because the language model has no space for knowledge as we understand it; it merely has a model of the statistical relationships between how words appear in its training material.

All of that is important to understand what ChatGPT is doing when you tell it to, say, write an essay. It is not considering the topic, looking up references, thinking up the best answer and then mobilizing evidence for that answer. Instead it is taking a great big pile of words, picking out the words which are most likely to be related to the prompt and putting those words together in the order-relationships (but not necessarily the logical relationships) that they most often have, modified by the training process it has gone through to produce ‘better’ results.
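The statistical picture described above can be made concrete with a deliberately tiny toy, nothing like ChatGPT's actual architecture: a bigram model in Python that stores nothing but counts of which word follows which, yet can emit fluent-looking sentences. The training text and all the word choices here are invented for illustration.

```python
from collections import defaultdict

# Toy bigram "language model": it stores nothing but counts of which
# word follows which in the training text -- no meanings, no facts.
training_text = (
    "world war one started in 1914 . "
    "world war one ended in 1918 . "
    "water makes you wet . "
    "rain makes you wet ."
)

counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for w1, w2 in zip(words, words[1:]):
    counts[w1][w2] += 1

def next_word(word):
    """Return the statistically most common follower -- pure counting."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else "."

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("world"))  # emits a fluent-looking sentence from statistics alone
```

Ask it to generate from "world" and it produces "world war one started in 1914 .", which looks like knowledge of a historical fact but is only the most frequent word ordering in its training data; had the training text said the war started in 1915, it would repeat that just as confidently.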


In effect, artificial intelligence is exactly what we all think about when we encounter Natural Stupidity. AI is the person you meet who simply regurgitates what they heard on the internet, without understanding what those words mean or having any deeper understanding of why those claims are being repeated.
 
Technology, whether using AI or not, will continue to eat away at rote and repetitive tasks. It isn't going to make huge leaps to creating things like full-length novels. Current computer technology is maxed out in developing uses like ChatGPT, and incremental improvement in computer technology and costs is not going to provide a breakthrough.

Can they replace actors yet? No. But they will at some point in the future.
This has already happened. Movie extras in crowd scenes have already been replaced by CGI models. Probably next at risk are voice actors, in both video games and animation; computer-generated voices will displace human actors.

Primary actors will likely become CGI- and AI-augmented. Things like aging and de-aging will become more commonplace. Make-up artists will be at risk when prosthetics for aliens, etc., become post-production additions. Stunt doubles and body doubles may be replaced by CGI models and deepfake images. I feel this technology is at hand and will allow for reduced production costs, and thus be adopted.
 
Technology, whether using AI or not, will continue to eat away at rote and repetitive tasks. It isn't going to make huge leaps to creating things like full-length novels. Current computer technology is maxed out in developing uses like ChatGPT, and incremental improvement in computer technology and costs is not going to provide a breakthrough.


This has already happened. Movie extras in crowd scenes have already been replaced by CGI models. Probably next at risk are voice actors, in both video games and animation; computer-generated voices will displace human actors.

Primary actors will likely become CGI- and AI-augmented. Things like aging and de-aging will become more commonplace. Make-up artists will be at risk when prosthetics for aliens, etc., become post-production additions. Stunt doubles and body doubles may be replaced by CGI models and deepfake images. I feel this technology is at hand and will allow for reduced production costs, and thus be adopted.
Yes, I'd agree. My slightly glib 'no' really meant all actors, including the leads. Interesting point about the stunt and body doubles, as they are always careful not to show them too clearly anyway.
 
It is official: ChatGPT isn't performing as well as it originally did, and no specific reason has been discovered. Its handlers claim that each new version is better than the last. Perhaps it is a victim of garbage-in, garbage-out: a database with corrupt data produces corrupt results. The education of the original ChatGPT program was probably hand-picked data; everything was preselected, even the nonsensical. That selection was influenced by people seeking truthful answers, which could have made the original data a practical template for answering the questions it was asked.

As time went on, ChatGPT spent more time collecting its own data to answer people's questions, and maybe, just for the heck of it, searching through whatever was out there. By using the unfiltered internet, ChatGPT could have eaten a lot of garbage created by humans with little or no concern for accuracy. This suggests that data manufactured for the express purpose of shaping personal opinions is harmful to systems that try to draw accurate, logical conclusions about queries unrelated to the original garbage. It might even affect people. Has data become something with a physical presence in the real world, like air or water?
 
I warned you about AI – Terminator director
The Hollywood filmmaker has warned of a “nuclear arms race” in artificial intelligence


Canadian film director James Cameron has said his 1984 sci-fi blockbuster ‘The Terminator’ should have served as a warning about the dangers of AI, and that the “weaponization” of the emerging tech could have disastrous consequences.

Speaking to CTV News for an interview on Tuesday, the award-winning director was asked whether he believes artificial intelligence could someday bring “the extinction of humanity,” a fear raised even by some industry leaders.

“I absolutely share their concern. I warned you guys in 1984 and you didn't listen,” he said, referring to his film ‘The Terminator,’ which revolves around a cybernetic assassin created by an intelligent supercomputer known as Skynet.

Cameron went on to warn that “the weaponization of AI is the biggest danger,” adding “I think that we will get into the equivalent of a nuclear arms race with AI. And if we don't build it, the other guys are for sure going to build it, and so then it'll escalate.”

In the event artificial intelligence is deployed on the battlefield, computers may move so quickly that “humans can no longer intercede,” the director said, arguing such technology would leave no time for peace talks or an armistice.

“When you’re dealing with a potential of it escalating to nuclear warfare, deescalation is the name of the game. Having that pause, that time out – but will they do that? The AIs will not,” Cameron continued.

The famed filmmaker has issued similar warnings in the past, saying that while AI “can be pretty great,” it could also “literally be the end of the world.” He even went as far as to say that “an AI could have taken over the world and already be manipulating it but we just don't know,” speculating that sentient computers “would have total control over all the media and everything.”


Though the tech has yet to achieve world domination, some leading experts have also sounded alarms about artificial intelligence. Earlier this year, AI giants including OpenAI and Google’s DeepMind joined academics, lawmakers and entrepreneurs to issue a statement calling to mitigate “the risk of extinction from AI,” saying it should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

A similar open letter published last March urged for a six-month pause on the training of powerful AI systems, saying they “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Signed by more than 1,000 experts and executives – including Elon Musk and Apple co-founder Steve Wozniak – the statement warned that AI could pose “profound risks to society and humanity.”
 
I'm having a hard time following the arguments over AI in the military and what the concerns are. What does "computers in the battlefield moving too fast for human intervention" mean? What are the computers attached to? Are these automated tanks or flying drones? How is AI being utilized, and what makes it significantly different from algorithm-based systems? Why would there be an escalation to nuclear warfare?

I see AI aiding in tracking an identified target without human interaction, after the target has been identified by a person. Instead of a soldier needing to sit out in the open holding a laser on a target, the soldier can blip the laser to locate the target and then run away before any counterattack. Likewise, defensive targeting of incoming missiles could be done much better by a machine-learning-derived algorithm than by one coded by hand. These are merely extensions of what is currently used.

To me, the biggest obstacle in adapting AI to warfare beyond individual targeting automation is the lack of available training data. There is no data on large-scale missile attacks, offensive or defensive, much less data on nuclear attacks. I don't see any Terminator or WarGames or RoboCop scenarios in our future.
 
A similar open letter published last March urged for a six-month pause on the training of powerful AI systems, saying they “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Signed by more than 1,000 experts and executives – including Elon Musk and Apple co-founder Steve Wozniak – the statement warned that AI could pose “profound risks to society and humanity.”
This is stunningly naive of them. Do the signatories really believe that Russia and China would abide by that, rather than gain a six-month lead in AI development?
 
