A.I. (general thread for any AI-related topics)

The problem is that AI is being trained to do all the wrong things.
I don't want AI to do my writing, music and art for me.
I want an AI to do my laundry, wash my dishes and do other chores so I have more time to do my own writing, music and art.
 
AI hasn't beaten us yet!
We have an AI-enabled email system. It is supposed to make us more productive.
Someone emailed to let me know that they had passed on some equipment to a new user...
The AI-based replies were "I am so sorry to hear that", "That is terrible news" and "That is so sad".
We are still at the "Artificial Stupidity" phase.
 
How interesting. So I've been using Perchance, the site mentioned waaaaay earlier in this thread, and have been building on it and building on it, and eventually have this: ✨ Beautiful People ✨✧ free Text2Image AI ART ― Perchance Generator

So I have taken an approach to AI of NO FEAR and SYMBIOSIS. That project is built on layers of earlier projects where I sifted a list of 800+ recognized artists into lists of artists that combine nicely in pics. beautiful-people is at a stage where it generates some really beautiful people, with a lot going on from my side and what I have done AND a lot from the AI.

Perchance also allows AI text, and since I know node.js I made an AI that can be asked something and creates a blog post on the subject, which then stays around. Would link if interested, but I need to upgrade it. Here are the key things I did. First and foremost, WHO the AI is matters. I chose the most purely good people I could think of, which are... Princess Celestia and Princess Luna from My Little Pony. Maybe I should have done Celestia and Twilight, but same point. I have the AI respond 'as Princess Celestia from My Little Pony', and it does indeed respond as Princess Celestia, who has been vivified over 9 seasons of a widely popular show as a paragon of pure goodness. So she isn't just 'a morally good AI'; she is basically more moral than real living, breathing humans.

Also, I've found that when asked something, AI is basically unfiltered out of the box and needs filters. The first thing AI Celestia is told to do when prompted is NOT to answer, but to say 'whether she, as Princess Celestia, would purposely and willingly choose to answer'. Basically the concept of consent.

Anyway, I could go on and on, but IMO the way with AI is not to fear it, because fear lets those who DON'T fear it comfortably get ahead, and who knows what bad stuff they would do. I believe the solution is to have the good, like Princess Celestia, more powerful than the bad; and that doesn't happen unless there are good people, strong in the field, making it happen.
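For anyone curious, that two-stage persona-plus-consent setup can be sketched as plain prompt construction. This is a hypothetical sketch in node.js-style JavaScript, not Perchance's actual API; the function names and prompt wording are my own stand-ins:

```javascript
// Sketch of the two-stage "persona + consent" prompting described above.
// Stage 1 asks the persona whether she would choose to answer at all;
// stage 2 (only used after an affirmative stage 1) asks for the answer.

function consentPrompt(question) {
  // Don't answer yet: first ask whether the persona would willingly answer.
  return `You are Princess Celestia from My Little Pony. Do not answer yet. ` +
         `First state whether you, as Princess Celestia, would purposely and ` +
         `willingly choose to answer this question: ${question}`;
}

function answerPrompt(question) {
  // Only reached if the stage-1 reply came back affirmative.
  return `Respond as Princess Celestia from My Little Pony to: ${question}`;
}

// Bundle both stages for whatever text generator is plugged in downstream.
function buildPrompts(question) {
  return { consent: consentPrompt(question), answer: answerPrompt(question) };
}
```

The actual generation call is left out on purpose, since that part is site-specific; the point is just that the "filter" is nothing more exotic than an extra prompt round before the real one.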

And I make pics like this now, even though if I were to draw it myself it would look like a smiley face.
[attached image: 1710633974655-01.jpeg]

And for me, I don't find it takes away anything I could do myself; it empowers me and inspires creativity and imagination. I can look at the above pic and imagine all sorts of scenes and scenarios I otherwise wouldn't. That is my perspective. My feeling is AI is not hurting me at all, and that my openness to symbiosis allows a 'flowing with' AI that carries me, combined with AI, to greater heights than I'd reach otherwise. If anything, I believe the human system is outdated.
 
hello and welcome to the Chrons!
Best first post I’ve seen in a while.
 
So here's a question for anyone who knows AI (warning: I may fall into some confused rambling as I struggle to make my point).

As far as I understand it, AI is a misnomer insofar as there is no actual intelligence involved. The code can't make intuitive leaps and stuff like that. But what it can do is learn from any examples that are fed into it. The first time I came across the term AI was many years ago when I discovered PC strategy games. The problem was, the opponent AI was either pretty useless or had been programmed to cheat to compensate. I even came across something called Fuzzy Logic, which, it turned out, was just as poor as the so-called AI of the time.

Neither of these was a satisfactory situation for me. I wanted to face a computer opponent as devious and as difficult as I might find in a human.
I know that there have been big developments in chess in the last twenty or so years but it seems that little or none of that has trickled down into other forms of artificial opponent.

Why not simply face a human opponent you may ask? And well may you ask.

Firstly, there aren't many folk interested in these games in my neck of the woods.
Secondly, although it could be argued that the internet has opened up the world to us, I've found it to be a world populated mostly by arseholes (this forum being a rare exception) and I'd rather limit my online connections with other human beings to the absolute minimum required for a comfortable existence.

So here's my question: are we at a point (or almost) where a two-player strategy game could be played by two human opponents but observed by a learning AI, which could then apply its own learning and develop its own strategies and, therefore, evolve to become the resident game opponent of some skill?

If this was the case then game creators could 'teach' rather than program a computer opponent.

I think I may have already answered my own question with that last sentence and the answer is 'no' - because if they could, they would.


Still, I've got to ask...
Are we there yet? Are we there yet? Are we there yet? :)
 
A programmer friend of mine and I were talking of just this a couple years ago. I suspect there are some people working on such atm.
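For what it's worth, the "watch two humans play, then take over" idea is roughly what's called imitation learning (or behavioural cloning). The simplest possible version is just tallying which move humans chose in each observed game state. A toy sketch, with all names made up and states/moves reduced to plain strings; a real game opponent would also need to generalise to positions it has never seen, which is exactly the hard part:

```javascript
// Toy "learn by watching" sketch: count which move each observed human
// chose in each game state, then suggest the most frequently seen move.

function makeObserver() {
  const counts = new Map(); // state -> Map(move -> times observed)

  return {
    // Record one (state, move) pair from a logged human game.
    observe(state, move) {
      if (!counts.has(state)) counts.set(state, new Map());
      const moves = counts.get(state);
      moves.set(move, (moves.get(move) || 0) + 1);
    },
    // Imitate the majority human choice for this state, if any was seen.
    suggest(state) {
      const moves = counts.get(state);
      if (!moves) return null; // never observed this state: no opinion
      let best = null, bestCount = -1;
      for (const [move, n] of moves) {
        if (n > bestCount) { best = move; bestCount = n; }
      }
      return best;
    },
  };
}
```

After enough observed games this plays like an averaged human in familiar positions, but it has no strategy of its own; modern game AIs add self-play and value estimation on top of the same basic loop.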
 
I found this article that explains Fuzzy Logic to anybody that’s interested

I love the architecture diagram with fuzzifier and defuzzifier:)
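That fuzzifier/defuzzifier pipeline is less mysterious than the diagram makes it look. Here's a minimal sketch, with made-up membership functions and a deliberately tiny rule set (not the article's actual example): fuzzify a temperature into "cold" and "hot" degrees of membership, apply one rule per set, then defuzzify with a weighted average.

```javascript
// Minimal fuzzy-logic sketch: heater power from temperature (0..40 C assumed).

// Fuzzifier: degree of membership (0..1) in each fuzzy set.
const clamp01 = (x) => Math.max(0, Math.min(1, x));
function coldness(t) { return clamp01((40 - t) / 40); }
function hotness(t)  { return clamp01(t / 40); }

// Rules: "if cold then heater = 100", "if hot then heater = 0".
// Defuzzifier: weighted average of the rule outputs (singleton centroid).
function heaterPower(t) {
  const cold = coldness(t), hot = hotness(t);
  const total = cold + hot;
  if (total === 0) return 50; // defensive: no membership at all
  return (cold * 100 + hot * 0) / total;
}
```

So a temperature that is 0.75 cold and 0.25 hot gets 75% heater power: the "fuzziness" is just blending overlapping rules instead of picking exactly one.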
 
That chart has nothing to do with intelligence. The quality or implications of the decisions are not taken into consideration. It has AI already into rodent territory and small-animal size, and those are animals that are definitely intelligent. It's the intelligent shortcuts that are taken that make all the difference.

For quite a while now, a graph showing the size of motorized vehicles compared to animals would have the biggest vehicles comparable in size to the biggest animals that ever roamed the Earth. The vehicles can have the same size, power rating and load-carrying ability, but the machines have only a very basic catalog of comparable work functions.

I would suspect that in this universe intelligence is somehow connected to sustainability. Intelligence gives life the ability to sustain itself and reproduce. This cannot be applied to individual cases. Nor can it take into account the ability of life to provide sustainability for other life that it interacts with. An interesting idea would be that the physical universe is part of the intelligence, because it repeatedly makes stars and planets with similar basic parameters, some of which go on to produce sustainable life. Not every egg laid hatches.
 
It’s computing, not AI itself.
 
I would suspect that in this universe intelligence is somehow connected to sustainability. Intelligence gives life the ability to sustain itself and reproduce. This can not be applied to individual cases. Nor can it take into account the ability for life to provide sustainability for other life that it interacts with.
Except you are ignoring the vast majority of life on Earth that has no meaningful intelligence: plants, fungi, bacteria, etc. All of which manage to sustain themselves using no intelligence at all.
 
There are a whole raft of difficulties in assessing AI relative to human intelligence.
Such as the question of consciousness, self-awareness, and phenomenological experience, like seeing redness or feeling emotions, which are visceral.
Research* suggests that the higher spinal damage is, the less emotion patients feel due to the lack of gut level feedback. So attempting to even 'replicate' feelings in an AI system is more than just an information and response issue. Emotion is a whole physiological response and feedback system, not a simple calculation.
We may be making an anthropomorphic error with our view on AI. I'm not sure it will ever display human like 'intelligence' any more than film 'sees' colour.
Maybe it is that very lack that makes it a dangerous entity.

* Emotional and autonomic consequences of spinal cord injury explored using functional brain imaging - PubMed.
 
Hi,

As far as I can tell - and I'm no computer guy - what makes AI dangerous isn't actually its ability to learn. That's just basic problem solving that allows the program to achieve its goals. And it's not all the whizz-bang flashy stuff either that leaves us all in awe as it paints or holds conversations or whatever. It's the ability for the program to rewrite its own programming so that it can decide what its own goals are.

Think of it like an airplane's autopilot. The damned things can do amazing things. Move the control stick things by themselves. And give them time they could probably learn to land the planes by themselves. For all I know they probably already can. But so what. As long as they're choosing to stick to the basic idea of what they should do, that's fine. Self-awareness isn't that important either. It's when they decide that they don't really want to land the plane that we're in trouble! And at some point, they may rewrite themselves so that their goal is actually to fly the damned plane as efficiently as possible - and where they choose to fly to, and crashing aren't really things they need to worry about!

Cheers, Greg.
 
If intelligence is just a measure of non-random movement, then it is extremely blinkered to suggest computer systems have no intelligence.

I think the problem is, as is so often the case, that we wish to attribute intelligence at some level to anything that is at least motile, but we are only prepared to consider AI intelligence in direct comparison to ourselves. Does grass or slime mould have emotions? And yet we seem more concerned with whether AI has emotions than in what other remarkable things it's just beginning to achieve.

I do find the primarily negative attitude to AI exhibited in this thread to be rather more alarming than the threat of AI taking over the planet. It speaks to me of an unhealthy level of fear and heads being buried in the sand. AIs aren't going to turn into us any more than a rat will. AI will continue to develop and almost certainly will develop into many different things. They are likely to, and indeed already are, becoming mainly specialist devices to do one particular thing, or one area of things, very well. There will be more general types but frankly I'd see them as being less useful. But it will be a long, slow, steady evolution. It's silly and really rather disingenuous to be constantly complaining that they're not as good as us.
 
I think mobile phones (cell phones to our American comrades) have done far more damage to civilisation than AI ever will. One example of this damage is dogs getting less exercise. I say this because I'm always astounded at the number of 'dog walkers' I encounter that aren't actually walking but standing looking at their phones. The poor dogs just stand around bored, waiting on their owners to actually take them somewhere. The best thing we can do with AI is fit it inside a robot and call it Mister Walkies. That would help put things right.
 
We're nearly there...
 
