A.I. (general thread for any AI-related topics)

what makes AI dangerous ... [is] the ability for the program to rewrite its own programming so that it can decide what its own goals are.
The current level of AI, however, does not have the ability to rewrite its programming. Machine Learning techniques feed the system a large amount of data along with the expected results. That is the training phase. Most systems no longer continue learning after going live (though some do). The problematic areas for Machine Learning are that it is unpredictable what the machine may do in low-likelihood scenarios or when it receives bad incoming data, such as readings from a failed sensor.
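In code terms, the train-then-go-live split looks roughly like this (a minimal sketch using scikit-learn; the dataset and model choice here are purely illustrative):

```python
# A minimal sketch of the train-then-freeze workflow described above, using
# scikit-learn. The dataset and model choice are purely illustrative.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)            # training phase: data plus the expected results

# "Going live": the learned parameters are now fixed. The model only predicts;
# it does not keep learning from the new inputs it sees.
print(model.score(X_test, y_test))
```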
 
The current level of AI, however, does not have the ability to rewrite its programming.
Yet..... :D

Although it's probably incorrect to think of it as rewriting its programming. Its programming is how it 'thinks'; what it thinks is down to its training. This is not so different from life in many ways. We don't rewrite our programming; how we actually think, that is, the physical and biological process of thought, is, as far as I'm aware, pretty much fixed. However, we learn new stuff, experience new stuff, and on the basis of that we change our thoughts and our decisions. We are still thinking in the same way - our programming has not changed - but now we are thinking different things based on our 'training,' our life experiences.

I don't think AI is necessarily so very far away from that. And maybe eventually it will be able to rewrite its programming (something we can't do) and make itself more efficient, complex or whatever.
 
I was checking out replies to a video on the Bullitt car chase and (as petrol heads will realise) this is what happens when you over-automate. :)

[attached screenshot of the comment]
 
The current level of AI

I don't think AI is necessarily so very far away from that.
Sorry for taking those brief remarks out of context, but they kind of illustrate my view of AI in 2024, which (arguably rather cynically) amounts to this:
AI - in the truest sense of the word - doesn't actually exist
Contentious, perhaps, but from my experience, it seems that:
"AI" is nothing more than a term marketing people use as they know you can add x% onto whatever you're charging for the service/system:rolleyes:

There are a couple of relevant (hopefully!) points around this that may make this a bit clearer:
1. Artificial Intelligence - the Turing-esque definition - is a comprehensive system capable of complex, independent thought
It's a broad definition, but loosely (in my warped mind, at least:rolleyes:) is:
  • Capable of advanced thought and reasoning across multiple dimensions/domains
  • May be as good as (if not better than) your average human at such things
Sure, you can code/develop something to fool a Turing test. It's been done before. Take the Deep Blue/Kasparov scenario, for example:
  • There is a finite (albeit rather large) number of possible moves at any point in a chess match
  • Run simulations, calculate more steps ahead than a Grandmaster can, and brute-force the problem: it's possible to work the probabilities so that - hours later, when the human tires - the machine wins by virtue of consistent performance (i.e. it's unaffected by tiredness or cognitive strain and doesn't need to relax/sleep). A toy sketch of this kind of exhaustive search follows below this list.
  • However... if you ask Deep Blue what it thinks about, say, US politics, or who'll win the Premier League next season, will that response be more or less realistic than if you asked Garry Kasparov?
It's a very narrow use-case, is what I'm saying - far from a holistic system capable of advanced thought and decision-making.
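To make the "brute force" point concrete, the kind of exhaustive look-ahead Deep Blue did boils down to something like this toy example, shown on the simple game of Nim rather than chess (the real engine added alpha-beta pruning, dedicated hardware and a lot of hand-tuning):

```python
# A toy version of the brute-force look-ahead idea, using the game of Nim
# (players alternately remove 1-3 stones; whoever takes the last stone wins).
# The core "try every move, recurse on the opponent's replies" search is this.

def best_score(stones):
    """Return +1 if the side to move can force a win, -1 if they cannot."""
    if stones == 0:
        return -1                      # the previous player took the last stone, so we lost
    # Try every legal move; the opponent's best reply is our worst outcome.
    return max(-best_score(stones - take) for take in (1, 2, 3) if take <= stones)

print(best_score(10))                  # prints 1: the mover can force a win from 10 stones (take 2)
```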

and...


2. Working with ChatGPT/CoPilot in a Dev Environment...

In my last job, my boss wanted to see if tools like ChatGPT and CoPilot were any good, and whether they could make things easier for us at work, so we did some fooling around in a safe dev environment where we could test Python bulk-data-processing scripts (volumes of data that exceeded the practical limits of a single CSV file by a considerable amount:eek: and required some serious batch processing).
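For context, the sort of chunked/batch processing we were targeting looked roughly like this (a minimal pandas sketch; the file name, column names and chunk size are made up for illustration):

```python
# A minimal sketch of the kind of chunked processing involved. The file name,
# column names and chunk size are made up for illustration.
import pandas as pd

totals = {}
for chunk in pd.read_csv("huge_export.csv", chunksize=100_000):       # stream the file in batches
    partial = chunk.groupby("account_id")["amount"].sum()             # aggregate within the batch
    for key, value in partial.items():
        totals[key] = totals.get(key, 0) + value                      # merge into the running totals

print(len(totals), "accounts aggregated")
```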

  • We started by feeding it functions and asking for docstrings as output
    • The first problem was that ChatGPT wasn't up to date with the latest version of Python, and so it often failed to interpret the code properly
    • It failed so often that it was virtually useless, because you had to check every output (which took enough time that it was quicker to write the docstrings by hand)
  • We next tried getting the AI to write compact functions by giving it clear specifications in simple, clear English
    • It failed. Pretty much every time. Sometimes it looked like it might work, but digging deeper, the code was fundamentally flawed
  • I/We queried the AI on how to code x or y - what's the Python code for doing this or that?
    • It was garbage. I've told the AI it's wrong 3+ times in a row for a very simple single-purpose function, many times. It says it understands, says it's made a mistake, then tells you it's learned and this time the code is absolutely, definitely right:rolleyes:
      • It sounds believable - that's the danger - but is still useless
So my experience has been that it's far, far quicker to google something or look it up on Stack Overflow than even try using ChatGPT/CoPilot. But it looks/sounds incredibly convincing - even to veteran, cynical developers:rolleyes:. So, if the IT nerds can't quite be sure that AI works, how can a normal person (i.e. someone not kept in a darkened basement:)) be expected to tell whether the answers to any questions they ask are correct, or even close to the true/correct answer?:unsure:



True, my own tests weren't perfect, were biased, and were themselves fairly narrow use-cases, but...
In my view, they demonstrate that we're not working with true AI, just (relatively) sophisticated computer programs that substitute brute-force mathematical problem-solving for creative/original thought. We mostly (eventually) learn from mistakes and try something different, but AI doesn't seem able to do even that unless you tell it to (and not even always then).

And one final point...
Remember Blade Runner? The Voigt-Kampff test is akin to the Turing Test. Replicants are AIs inside a body, right?
And despite all the tech, the only thing that can make that distinction between human/replicant is Deckard (Harrison Ford).:unsure:

I remember the Ridley Scott film (1982) and it was ahead of its time back then. But that, in turn, was based on Philip K. Dick's story Do Androids Dream of Electric Sheep?, which - I think - was cutting edge when it was published back in the late 60s.

So, the concept's more than half a century old, right? And I'm not sure we've got that far, let alone further.:unsure::eek:

That being said, I'm an idiot: yesterday I tried writing leet, and got the 1337 back to front, so humans still are far from perfect:LOL::oops:
 
From Scientific American, "How New Science Fiction Could Help Us Improve AI...
Recognizing the influence that popular narratives have on our collective perceptions, a growing number of AI and computer science experts now want to harness fiction to help imagine futures in which algorithms don’t destroy the planet."

Fictional accounts could indeed give AI a nicer image.

"Some researchers recently sought to determine whether a text-generating AI could be coached to deliver human-quality prose. As their preprint results found, stories composed by an AI that was given crude prompts fell flat, but more elegant and creatively refined prompts led to more literary prose. This suggests that what we give to a generative AI is returned to us."

Wonder if they tried to create a science fiction story that would show AI in a more positive environment.

They could be saying that if users treat AI with more respect (or is that respect for the AI creators?), AI will be a beneficial tool to use, as it reflects that respect. A more likely prospect is that the more creative work people do themselves, before their work is automated, the better the results will be for everyone.
 
I think it's a bad mistake to always think AI is attempting to emulate us and therefore say it doesn't exist because it can't yet do so. I have several problems with this. The simplest is, is there some sort of threshold above which it suddenly becomes 'intelligence?' Or, more likely, are there degrees of intelligence? Humans are more intelligent than dogs (mostly) but that doesn't mean dogs don't have some intelligence. Dogs are more intelligent than snails (I think!!!) but does that mean snails don't have any intelligence? Secondly, why should AI try to emulate us, or at least the 'average' human, why not make it more akin to the rarer savant; not necessarily very intelligent generally but exceptional at certain things? If that means it's not AI then maybe we simply need a different name! Thirdly people always seem to focus on the intelligence bit and think in terms of the Turing tests etc. but somewhere along the line they forget the word 'artificial.' So long as that word sits there then surely any kind of comparison with any known 'natural' intelligence is meaningless.

AI will develop to solve problems we humans need to have solved. Along the way it's going to get ever smarter but does that automatically mean it has failed if it hasn't reached some arbitrary definition of intelligence we may have come up with? AI will develop to be whatever we need it to be, or, more accurately, we will develop AI to be whatever we need it to be. Does that mean it will develop some sort of sentience along the way? Who knows and does it matter? That is something we cannot know (yet) but it is also not something I believe any researcher is actually attempting to achieve. They are just trying to develop systems that will help us solve ever more complex problems.
 
Humans confuse physical traits with "intelligence." This apparently elevates humans above all other species in terms of capabilities. Language is a highly prized ability with both mental and physical traits. Human language is mostly shaped by physical activities, many of which are geared toward making the "best" use of a situation. In that respect - taking advantage of the greatest number of things - humans are seemingly far ahead of other species. Intelligence is measured in terms of human activity, which is driven by some very peculiar ideas that are based on emotions and not on logic.

There is no one single defining quality that constitutes intelligence; rather, there is a series of components that, taken together, could be used to measure intelligence. Using a system like that, humans could be seen to be intelligent by some components and not so intelligent by others. Non-human life scores extremely high in some of the components humans fail at.

Among other things, AI will be used to solve problems, but the types of problems being solved, or handled, will include taking advantage of people, separating people from their money, programming people to follow routines that benefit fewer people rather than more, etc.

It would be amusing if AI units were set up with an impartial set of rules for observing events, collating common points, that would read about human activities via the web, then create "original" articles that incorporated information from those activities without any human guidance, and then posted those articles on the internet for everyone to read as if they were written by people.
 
I would have thought that a fundamental attribute of a real AI would be that it could think and not just process. It wouldn't have to be conscious, but it would have to "know" what it was doing at some level.

It seems to me that LLMs do not know what they are doing, which is why they can produce output that is clearly (to us, but not to them) complete nonsense that can bear no resemblance at all to anything that they might have found about the subject of their search (see here).
 
I would have thought that a fundamental attribute of a real AI would be that it could think and not just process. It wouldn't have to be conscious, but it would have to "know" what it was doing at some level.

It seems to me that LLMs do not know what they are doing, which is why they can produce output that is clearly (to us, but not to them) complete nonsense that can bear no resemblance at all to anything that they might have found about the subject of their search (see here).
I read a "song meanings" post (about the song "Five O'Clock Whistle") online and was surprised to see that the interpretation had nothing to do with the lyrics' obvious meaning... after a while I suspected that the post may have been chatbot generated so I went to Open AI to see what ChatGPT said. The response was almost identical, it had no idea about the meaning of the song. I asked it about "Big Yellow Taxi"... again, it was unable to explain what the song is about. I tried coaching it to give a different answer, waste of time. The LLMs are just predictive text with bells on. It's Artificial Stupidity
 
I read a "song meanings" post (about the song "Five O'Clock Whistle") online and was surprised to see that the interpretation had nothing to do with the lyrics' obvious meaning... after a while I suspected that the post may have been chatbot generated so I went to Open AI to see what ChatGPT said. The response was almost identical, it had no idea about the meaning of the song. I asked it about "Big Yellow Taxi"... again, it was unable to explain what the song is about. I tried coaching it to give a different answer, waste of time. The LLMs are just predictive text with bells on. It's Artificial Stupidity
... "a finger pointing at the moon..." the chatbot can tell you a lot about the finger!
 
Secondly, why should AI try to emulate us, or at least the 'average' human, why not make it more akin to the rarer savant; not necessarily very intelligent generally but exceptional at certain things? If that means it's not AI then maybe we simply need a different name! Thirdly people always seem to focus on the intelligence bit and think in terms of the Turing tests etc. but somewhere along the line they forget the word 'artificial.' So long as that word sits there then surely any kind of comparison with any known 'natural' intelligence is meaningless.

AI will develop to solve problems we humans need to have solved. Along the way it's going to get ever smarter but does that automatically mean it has failed if it hasn't reached some arbitrary definition of intelligence we may have come up with? AI will develop to be whatever we need it to be, or, more accurately, we will develop AI to be whatever we need it to be. Does that mean it will develop some sort of sentience along the way? Who knows and does it matter? That is something we cannot know (yet) but it is also not something I believe any researcher is actually attempting to achieve. They are just trying to develop systems that will help us solve ever more complex problems.
Exactly my stance in my novel The Autist. And very true about the expert-systems bit. Some researchers (e.g. Mark Solms) are trying to set out the conditions and parameters for artificial consciousness.
 
People have been having fun with Google's experimental AI Overview...

AI can’t tell if something is a parody. The one about eating rocks comes from The Onion, and the Beatles one has been around for several years.

This video has had 2.6 million views

 
A program is a program, only as good as the coding behind it. The coders are simply doing a job. The people directing the coders might be like the movie directors who never watch other people's movies. They were/are using social media to supply answers to questions. Overall, social media is a poor, unreliable source of information; even when true, it can be heavily biased. It was used because it is a very cheap source of information.

AI systems are simply data synthesizers making music. Even if everything put into them is true, they can still mash up the wording so that the results are false. The more talented the composer, the better the output created by the data synthesizer. Having anybody off the street perform a job they have no real knowledge about will usually yield predictable results. The data industry seems to be always trying to substitute things that have to be paid for with free things that aren't nailed down. It started with anybody being allowed to be a critic/reviewer. It works and it doesn't work. People are always gaming the system, mixing garbage in with reality. For databases, which can represent anything: put garbage in with the good stuff and you get garbage out.

In the beginning, companies were simply listing search results as simple search results. It was nice; people put real things online, and the good websites got good ratings. As the number of websites increased it became difficult to sort everything out. Along came AltaVista, a search engine that could do the job quickly and efficiently. AltaVista had 17 percent of the web search traffic; Google had 7 percent. Then someone got the idea to add shopping and email to the search feature. It was poorly executed, and AltaVista steadily lost ground to Google, which understood how to integrate everything and keep the search results flowing smoothly by analyzing the relationships between websites instead of just indexing the results.
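Roughly speaking, the "analyzing the relationships between websites" idea works like this (a minimal PageRank-style sketch; the little link graph is invented and the real algorithm is far more elaborate):

```python
# A minimal PageRank-style sketch: a page matters if pages that matter link to it.
# The tiny link graph is invented; the real algorithm handles billions of pages,
# dangling links, spam and much more.
links = {
    "alta.example": ["news.example"],
    "news.example": ["blog.example", "alta.example"],
    "blog.example": ["news.example"],
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):                                        # iterate until the scores settle
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)       # a page shares its score over its links
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))                           # the most linked-to page scores highest
```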

Originally the websites were the results of well-meaning people. Easy data to work with. Eventually schemers entered the picture, using persuasion instead of physical force to get their schemes accomplished. These schemes usually separate people from their money or whatever personal power they might have. People have been floating the idea of rating websites, or even web data, as trustworthy or not, based on the accuracy of the data they contain. At this time, the only way to do this is to have people fact-check the information. Ironically, this would be the company's quality control department, which hasn't existed in its original format for a very long time. People use "reviews" to prove their output is good; the bad review is the product of a user who doesn't know what they are talking about. The "quality" of the product has been replaced with the profits generated by the product. Plus, quality control slows down production.

The size of the data quality control workforce is mind-boggling for companies that have been forever trying to reduce their workforce overhead. It might actually necessitate hiring back all the workers who answered the phones, manned the desks, created paperwork, filed and retrieved that paperwork, and staffed customer service departments. That future must look very disappointing to companies like Google who are trying to create a future composed entirely of automation, which is beginning to look like a very murky future.

Which company will step up, or will a new company materialize, to provide the quality control necessary to make AI-answered questions a viable, trustworthy, productive operation? We could continue to bumble along until the technology gets to the point where it can't be fooled by human beings. Or will AI be regulated by only using it where it can't make a public mockery of itself?
 
