A.I. (general thread for any AI-related topics)

Imagine a real-time rendered virtual reality game that you can explore endlessly... Whole universes from simple (spoken) prompts. It's coming.
Think of the next version of No Man's Sky, which currently has unlimited procedurally generated planets "to explore."

The game might generate unlimited "NPC Personalities" with an AI chat generator so that you could endlessly converse with these generated personalities in a themed visual environment on your screen.

OOOOH, with 3D immersive goggles sold with a combined overstuffed recliner/toilet so that you can live the rest of your life immobile, like the "humans" of WALL-E, but without actual human interaction.
 
Physicist Angela Collier offers this video:

AI does not exist but it will ruin everything anyway


I find this to be a poor logical argument; namely, the presenter says: despite the commonly used definition of AI, I will choose my own, personal definition and declare everyone else wrong. (Unfortunately, I see this argument used quite a bit in multiple discussions.)

Probably the most well-known test for AI is the Turing test and, I would argue, that milestone has been passed. Computer programs, of various types, do exceed the capabilities of human beings for various applications, just as mechanical tools exceed the capabilities of humans in many physical applications.

Yes, AI exists. It merely is not as awe inspiring or fearful as some would imagine.
 
I would have to disagree. There is no intelligence at work here; simply pattern matching and recycling stuff that matches the pattern. This is not the definition of intelligence, which is generally something along the lines of "the ability to acquire and apply knowledge and skills." Whereas what we currently have is "the ability to acquire and apply information (without actually having any knowledge of what the information is)." Which I would argue is something very different. The difficulty is that the current common definition of AI is actually more of a marketing slogan than anything else.
 
Yes, AI exists. It merely is not as awe inspiring or fearful as some would imagine.
This comes down to a marketing argument.
"Almond Milk" is as much milk as anything produced by any mammal because hot-shot marketing departments say it is. We might also note that several US States have banned the use of the word "Milk" to describe milk from cows if it has not been pasteurized.

The problem exists when people have an idea of the meaning of "AI" as part of a cultural discussion that has gone on for decades. What is currently being marketed as "AI" is nothing of the sort.

The question is whether words and concept phrases have meaning beyond whatever the hottest marketing minds can twist them into. And you've come solidly down on the side of "Almond Milk" -- for that you are not alone.
 
The question is whether words and concept phrases have meaning beyond whatever the hottest marketing minds can twist them into.
I would really prefer to discuss AI rather than Almond Milk and I do not understand the logic of switching the focus of the conversation. I will repeat three assertions to try to maintain the focus of this discussion.

1) AI has been accepted via common usage. It may be a subjective definition, but, like it or not, there doesn't seem any room for debate that this has happened.

2) If one wants a somewhat more objective definition of whether a computer program can be considered AI, I suggest the Turing Test. There is certainly room to suggest alternatives and I would be interested in hearing those.

3) I extended my second point by asserting that, in my opinion, the Turing Test criterion has been met. Although I am unaware of any formal evaluations using the precise Turing Test, I feel that the general sense of the test has been achieved, in that many results of AI programs can easily be confused with what might have been done by humans. This is also an area that could provide for interesting discussion.

How should we determine whether a computer program should be declared AI: via common usage or via some test? And would such a test currently be met?
 
the Turing Test criterion has been met
It may pass the Turing test, but ultimately it is just using a structured reply system: phrases that have some kind of probability of turning up in a conversation. The words could be coming from a disoriented person, but it's a response.

If that's what the Turing test was supposed to identify, it does.

In terms of continuity, it flunks out big time.

In terms of accuracy, it flunks big time.

In terms of reasoning, it's highly flawed and prone to simple mistakes.

In terms of understanding time and space, it flunks big time.

All of which describes how people can engage in conversations in real life.

AI is mimicking speech in such a way as to appear human, but it is done by cheating. It is simply searching its vast collection of data, at a very high rate of processing, for what statistically looks like what a person would say next in the conversation.

The Turing test only identifies what passes for human conversation, it can't identify original creation of human thoughts.

Create a lot of advertising slogans and keep repeating what's not true and people will start to believe it without even really knowing what they are talking about. There is a lot of money and all kinds of power at stake. There are millions of menial but complex tasks that "AI" programs can do faster and often better than people can. Exercising intelligence is not something current AI programs can do.

We won't know it is intelligent until its output is genuinely intelligent. Just talking to it proves it's not intelligent. Perhaps testing it the exact same way we test students would give us a better idea of how "advanced" it is.

Ironically, people insisting machines do jobs that they can't do, is not a sign of intelligence.
 
I think this is a really difficult problem. Most, probably all, artists, musicians, writers etc. learn their trade by studying the works of the artists, musicians, writers etc. that have gone before them. And that will mostly be from the internet, libraries, galleries, second hand books etc., i.e., with no payment to the artists they are studying and learning from. So the largest difference is really whether the 'student' is human or 'AI'.

I'm not saying I approve of the current, or government proposed, situation but I am saying the problem is a little more nuanced than is sometimes presented.
 
It's not simple by any means. Erosion of rights has become a commonplace way of achieving progress. Technology is wielded like a battering ram. The valuation of AI is based on expectations of future capabilities, not on actual performance. In the US the AI companies are energy hogs and want to hook their power lines directly up to the electric power generating plants instead of operating off of the grid. None of this instills confidence. There is a sarcastic saying which goes along these lines: you can make up for losses by increasing the volume. That's the same as scaling up the equipment to correct deficiencies. Different equipment is needed; right now they are trying to drive a square peg into a round hole.
 
...right now they are trying to drive a square peg into a round hole.
Somehow this reminded me of an old Saturday Night Live skit:


Both the point about scaling, and the fact that I'm linking to a YouTube video that is almost certainly NOT paying royalties to NBC, the owner of the copyrights for Saturday Night Live; also that this site is almost certainly not paying royalties to any of the rights holders of the images or videos linked from this site, yet it is very efficiently set up to share those works. Maybe this comment makes that video a whole new work...
 
Ada Lovelace was the first person to recognize that analytical engines could perform universal computation. Not having a physical machine to test her ideas on, she had to do everything with thought experiments. Using math expressions it was very easy for her to show what the engine was capable of doing. What she wrote is still true today. It is ironic that the work of Babbage and Lovelace was lost for a hundred years. Much of what they had discovered had to be rediscovered when the first universal computers were being built in the 1940s.

She had a definite opinion on an analytical engine's ability to create something original: it couldn't.
Alan Turing read her Notes, and coined the term "Lady Lovelace's Objection" ("an AI can't originate anything") in his 1950 Turing Test paper. He got around this by saying that if the machine's response could surprise a human being, then it has passed the intelligence test. Engaging in conversation was the easiest way of showing this.

Ada Lovelace's statement is usually presented this way.
“The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.”
A more complete statement is here:
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any … relations or truths. Its province is to assist us in making available what we are already acquainted with.”

A test was created, called the Lovelace Test. It's based on recognition of creativity, and it fails miserably at identifying artificial intelligence. The results are a Maybe with a million asterisks following it.

The conversation proof is dead on arrival, as it is just pulling snippets of words out of a vast collection of randomly collected words and using probability to pick the next word in a conversation. Having millions of conversations to mimic provides an apparent understanding of how to carry out a conversation, but it isn't a sign of intelligence.

Some people have said that AI's ability to create visual art is a sign of intelligence. When it comes to painting it is just using cut and paste to create pictures. It's a form of collage where a single picture is the result, the seams of the various component elements being blended together. Photoshop does a good job of that. A program can use a large collection of images that are randomly cut out and blended together around a common theme (make a face from previous images of what has been labeled a face), so creating artwork is not a good test.

Creating music is another art form that AI is credited with, but music can also be created by all kinds of natural processes, from wave motion, waterfalls and water pumps to sounds created by the wind blowing through, under and across an endless collection of surfaces, and professional-sounding pieces can be made by preprogrammed synthesizers set to randomly pick whatever they are programmed to create. So music is out.

Another part of the Lovelace test is whether the programmer cannot explain how the program arrived at the output it did. Google engineers have a poor understanding of how their programs pull data and make conclusions about what to show to people, aside from the specific items they tell their programs to show. It's called the black box phenomenon. So the "it can't be explained" test is out.

One way to identify an intelligent person is if they can tell other people a "better" way of performing some action. Perhaps the day will come when these AI programs will be able to tell people a better way to accomplish what they are trying to do, with real results. It's already being done with medical treatment, by being able to see what human eyes can't see, but does that count? Even that's a toss-up. It will have to involve practices which people believe are the only way to do something and which turn out to be completely wrong. It would be better still if we didn't ask the machine to tell us. Until then, we will probably never know if the machine is capable of making intelligent decisions.

We could always change the definition of intelligence so it no longer is something that only a living entity can do.
 
We could always change the definition of intelligence so it no longer is something that only a living entity can do.
I agree with pretty much everything you say in that post, except that last statement. There is nothing that says intelligence is something that only a living entity can do. Other than Ada Lovelace's comment about an, exceptional for its time, but still very primitive analytical engine. Just because no artificial system has yet exhibited intelligence doesn't mean it won't happen sometime in the future.

I, personally, believe it is very possible but it may not be for a long time yet.
 
Let me provide a counterargument as to why I believe that computer programs have achieved intelligence, albeit one different from human intelligence.

Computer programs can now analyze and create things, in many scenarios, at a level meeting or exceeding human expert capabilities. This is offset because there are also (many fewer) scenarios where the computer program will fall quite short of human capabilities. Furthermore, this ability is not directly encoded by human developers, rather it is learned. We are now at the point of having algorithms that (essentially) create algorithms to perform tasks. This last point precludes any sort of intellectual analysis of what the program is capable of doing and it is impossible to reverse engineer a result to understand why it was produced.

The Turing Test is a blind test, meaning that one cannot look at the mechanism used to fulfil the test. To do otherwise creates a logical loop: to deem a program intelligent, one would require it to use a mechanism that one has already determined to be intelligent. Furthermore, the ability of a Large Language Model to parse written or spoken language and provide appropriate responses is not a capability programmed into it by humans; rather it is trained in how to interpret and reply.

Art created by AI programs is not "cut and paste." It is an incremental approach that generates appropriate micro details and builds those up to create a macro response of varying degrees of correctness. This is in comparison with the more human approach of defining the macro level target and creating the micro details with varying degrees of correctness.

I am not willing to equate "music" with the sounds of "wave motion, waterfalls and water pumps," nor does an AI program "randomly pick whatever it is programmed to create." Again, AI-generated music uses a learned approach; I doubt anyone has ever written an algorithm to directly create music. AI programs are trained on what constitutes music (including lyrics) and generate songs in a manner that defies rigorous analysis.

I'm unsure as to what the "it can't be explained test" is nor why it is "out."

I find very few humans can devise a better way of doing something and that very few can explain how they did something. Ask a musician, a painter, a writer how the person came up with an item and one will largely be met with a blank stare. On the other hand, my GPS can readily identify a better route and tell me when I deviate from it, although I suspect that this is largely an algorithmic result sans AI.

Machine Learning techniques are quite different from algorithmic programming approaches used in the past and the results achieved are quite spectacular. AI programs can replicate many of the things done by animals that are readily defined as intelligent (dogs, rats, etc.), yet some don't wish to declare the programs as being intelligent. They certainly are not sentient and I don't see us progressing to Isaac Asimov's robots, but AI programs are meeting and exceeding the abilities of humans in many instances.
 
If machine intelligence is not the same as intelligence then maybe it could be called something else, like Predictive Behavior. That could make a good sales pitch, some buzz word that says machines that predict the future.

I think the term machine learning techniques is better phrased as machine training techniques. Training is not the same as learning. Learning happens after training is provided, but it is not an automatic stage of development. The question is whether the process ever gets past the training stage to a true learning stage. The "learning stage" of AI training enables the machine to predict future events; it works by implanting additional programming into the AI program, which enables it to look smart.

It would be nice if a truly super intelligent machine that is light years ahead of people would be able to come up with concepts that have never been thought up before. But if AI is only rehashing what has already been done, can it come up with original thoughts? There is the hope that AI will be able to discover things we haven't been able to see for ourselves and then come up with new conclusions. But if it was programmed to find things and programmed to draw up explanations, that seems more like programming at work.

Just because a machine can do something better than a person can, I don't think that's a sign of intelligence. Eyeglasses don't make people smarter; they enable people to make use of their environment. A program, smart or not, can analyze visual data zoomed in to micro dimensions much better than a person could ever hope to. Detecting anomalies in medical scans is an excellent example. It doesn't need to be smart to do this, it just needs good programming.

It's easy to picture. Blow up a 1 inch square image so that it is 30 feet by 30 feet. For a person to look at it closely they are going to need a scissor lift that goes up 30 feet and a lot of time. Or they can stand back and use binoculars, which will also take a lot of time. It's quite possible a person wouldn't even know what they were looking at.

AI works really great, fantastic even, right up to the moment it runs off a cliff, which could be after one word or a couple of pages if you are lucky. Anything that says 15 feet can be less than or greater than 20 feet is not intelligent, artificial or not. It's just guessing. It's terminally defective artificial intelligence at best, or possibly Almost Intelligent.

If the errors were few and far between, that would be acceptable. At the current state of affairs it is only guessing. But it's not the kind of intelligence that people use to guess. It's just sleight of hand. It's got a very big hat full of data to pull a rabbit out of.

No matter how you slice it, I don't think non-text art is a good way to determine whether a program is intelligent or not. It's too easy to make.
the more human approach of defining the macro level target and creating the micro details with varying degrees of correctness
With digital creation of art, it's no problem for people to start at the micro level and develop on out from there.
 
We argue about artificial intelligence creating art but is art purely about intelligence? I’d argue that inspiration and imagination are just as important.

Most art is partly rules based - rule of thirds in imagery, scales etc in music and common tropes in writing. These are all things that can be learned by an AI but I’ve seen nothing so far to convince me of the use of artificial imagination or even a muse.

Are imagination and inspiration a side product of intelligence, or are they something different? I genuinely don't know the answer.
 
I got curious. What would happen if I just asked for a story or a picture? Would it want additional information? Nope. It just spit out a story and then a picture. Each time there was no hesitation.

When I asked where the prompts came from to make the picture, Copilot said there was some sort of a mistake. It said it never made the picture. I downloaded the picture and uploaded it back into Copilot and told Copilot, "there's the picture you made."

It persisted with this basic response each time I tried to find out how it made the picture. Each time it said it didn't.
Then I said "I want to know where the picture came from"

"It seems there was a mix-up earlier. I didn’t generate any pictures in this conversation, nor did I reference any existing ones. The earlier message saying "Your picture is on the way" might have been a technical glitch or miscue on my part. However, if you have an image or a vision in mind, feel free to describe it, and I can help bring it to life for you! Let me know what you'd like to create."

I don't think the picture was created on a whim or drawn in element by element. I would say it is a cut and paste effort, all the lines smoothed in, nothing more. The sky, the mountains, the plants, the tree, the frame, just grabbed from somewhere and put together as a single image.

 
...but AI programs are meeting and exceeding the abilities of humans in many instances.

So do sewing machines, lawn mowers and vacuum cleaners. Machines that come without chips. Excelling at something has little to do with intelligence. Intelligence is inventing a machine that excels in performing tasks for you, in a better, faster and more consistent way than you yourself ever could.
Chips are machines. Their analytical abilities are phenomenal. Processing data by blindingly fast comparing it bit by bit. Humans would find such tasks mind-numbingly boring and tiring. We would quickly lose our concentration. It does not require intelligence.
But then, what is intelligence? Being creative, imaginative, inventive? We hardly understand how our own minds work, where our ideas stem from. But it does require understanding what you're doing, what you desire to achieve, and working on it until it is satisfactorily accomplished.
AI has no desires. It follows its programming, without understanding any of it. Nor can it deviate from or halt its process.

Let me provide a counterargument as to why I believe that computer programs have achieved intelligence, albeit one different from human intelligence.

We are now at the point of having algorithms that (essentially) create algorithms to perform tasks. This last point precludes any sort of intellectual analysis of what the program is capable of doing and it is impossible to reverse engineer a result to understand why it was produced.
I am not sure what you are saying here. Algorithms that create algorithms? Of its own? Purely of its own? For purposes only AI itself knows and understands and nobody understands how or why (and is OK with it)? Seriously?
If this were true it would be time to shut down AI, or remove the ability to rewrite its own code from its programming. Not because AI can become a threat, but because I seriously doubt it understands what it is doing. The result might be faulty in unpredictable ways. Sure, it can remove errors from code, short routines. I use ChatGPT myself when I get stuck with Python (a fairly new language to me). But it can also make a mess of it. Writing code without a purpose, a dedicated goal, an understanding of what it is that the algorithm should do, is unlikely to result in code that conforms to what you were aiming for. If there was a clear aim to begin with.
If AI were intelligent it should stop what it is doing and concentrate on analyzing zillions of chunks of data.


Just my 2 cents.
 
