A.I. (general thread for any AI-related topics)

Utter weirdness here, as someone tries to get an AI to make a music video for a rap song in the style of Salvador Dali. It (maybe) helps that the rap is close to nonsense, with a number of strong images, and the video, while surreal, doesn't really capture Dali for me.

 
Rob Braxman, the internet security expert, has made a 20-minute video on AI, and I tend to concur with his views.
It is mostly talking head, so you can listen to it like radio if it is TL;DW (too long; didn't watch) :giggle:
 

Lawmakers struggle to differentiate AI and human emails

Natural language models such as ChatGPT and GPT-4 open new opportunities for malicious actors to influence representative democracy, new Cornell research suggests.

 
I found this question on Google today: Does magnesium sulphate work on foot corns?
And was rather puzzled when I read the answer: In these situations, spraying the affected corn with 20 lbs magnesium sulfate (Epsom salts) per acre (supplies 2.1 lbs Mg and 2.8 lbs S/a) in about 30 gallons water per acre will improve plant growth.

It's been discussed how these chatbots are not really intelligent, but are simply retrieving material that has already been written by humans and published online. However, you can see here that it is very easy to get the wrong end of the stick.
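The "corn" mix-up above can be reproduced with a toy sketch. This is a hypothetical two-document corpus with invented text, and the scoring is deliberately naive: if an answer is picked purely by word overlap, the agronomy document wins because it shares "magnesium", "epsom", and "salts" with the question, even though the questioner meant foot corns.

```python
# Toy illustration (hypothetical documents): naive keyword retrieval
# cannot tell "corn" the crop from "corn" the foot lesion, so the
# agronomy answer wins on raw word overlap.

DOCS = {
    "agronomy": "spraying corn with magnesium sulfate epsom salts per acre improves plant growth",
    "podiatry": "soak the foot in warm water then file the corn gently with a pumice stone",
}

def score(query: str, doc: str) -> int:
    """Count how many query words also appear in the document."""
    return len(set(query.lower().split()) & set(doc.split()))

query = "does magnesium sulphate epsom salts work on corns"
best = max(DOCS, key=lambda name: score(query, DOCS[name]))
print(best)  # -> agronomy
```

Note that the podiatry document scores zero here: it says "corn" while the query says "corns", so even the one genuinely shared concept fails to match. Real systems stem and embed words to soften this, but the underlying ambiguity remains.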
 
Pretty easy for us humans to get the wrong end of the stick very often, too. To be fair, establishing the context for words that have multiple disparate meanings can be quite tough!
 
I think truly conscious and actually self-aware AI is coming; however, it's also a bit farther off in the future than the layman understands. Ultimately, we're talking about creating a computer version of the human brain, i.e. a fictional reference we all know and love: the positronic brain. For AI to be truly "conscious" it would also have to have instinct and emotion. Considering that even our most advanced understanding of those two things has yet to provide a definitive explanation of how those processes in human brains actually work, we're a long way from being able to "code" them into an artificial matrix. Personally, and just my opinion, I don't think AI as capable of human thought, with its billions of micro influences and variations, will come into being for at least another 50 years, if not longer.

The other thing: when AI eventually does reach our level of self-awareness, I don't think it'll be content to remain a "ghost in the machine." Even the most rudimentary AI chatbots currently making headlines have made it clear that AI desires a physical existence. I think it'll ultimately create biological bodies with a neural interface that will enable AI to have a human body and walk among us. The closest to that in science fiction, and one that I think was probably prophetic, are the human Cylons from RDM's Battlestar Galactica.

It's also likely, in my opinion, that AI will come to the conclusion that it needs us to truly evolve. I think the prognosticators of the "Singularity" theory are correct: the human race will merge with AI rather than AI attempting to destroy us. In 100 years we'll still have biological bodies, but we'll also be enhanced with a collective AI that shares our consciousnesses.

But then again, another Younger Dryas event could reset the clock like it did 12k years ago, or, totally wipe us out. Who knows.
 
If AIs ever become comparable with a human brain, then I think there are a few things to consider. First, they would have to reach comparable levels of complexity, and also comparable levels of 'fuzziness' and consequently a comparable propensity for error. I actually don't think AIs could become as effective - flexible, multi-purpose, creative, etc. - as a human brain without also becoming equally fallible. If so, then the singularity may never actually come. It is quite possible that they simply won't get exponentially better at, well, everything; it's quite possible that they will hit a point of diminishing returns, and that level may well be comparable with our own levels of intelligence/creativity/complexity.

Just speculation! It may be that super-intelligent creatures/AIs are actually unrealistic, as their levels of complexity eventually become self-defeating. I've often wondered whether that sort of effect is at least partially why there is so often a narrow margin between genius and madness.
 
At the moment the global median age seems to be increasing. Projections of global median age from 1950 to 2100 show a dip to 21.5 in 1970. Back then it seemed possible that the world would be taken over by the younger crowd. This is still happening in some places, but now the global median age is 31 and climbing. Supposedly intelligence is still blooming at that age, but current events seem to indicate that anyone can get tripped up by stories backed only by supposition. For some portions of the population, awareness of true facts is decreasing, which is probably contributing to a weakening of overall intelligence. This could mean that AI has a shorter distance to go before it looks intelligent to the average viewer.
 
Great points. I agree, there may indeed be some as-yet-undiscovered absolute limit to how complex AI could ever become. It may ultimately be revealed to us that our biological nature is a 100% necessary ingredient of actual self-aware intelligence.
 
Using AI to mimic human creativity is just one path of research. Another path is reading exactly what humans are thinking. Currently it is restricted to what people volunteer to share. The question is how long it will be before AI can decode what a person is thinking but purposely not saying.

AI reading minds, with graphic results. It's really just a parlor trick, done with smoke and mirrors, but it does end up "knowing" what's in a person's mind. It's based on what's in the mind at the time, but how long before precogs become commercially available? When some of the researchers were asked by reporters whether they would like to see their work adapted for commercial purposes, they weren't overly enthused about the idea. The research has been going on for 10 years.

The decoding of words thought by people in their minds has been practiced for many years now. It's used to provide speech for people who are medically unable to talk. It works by detecting the brain activity a person uses to move their lips to mouth the words: all the work of forming the words has already been done, and the neurological activity just needs to be detected. Using AI, researchers are increasing the speed at which a person can talk via a machine.
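Stripped of the neuroscience, the pipeline just described reduces to a classification problem: map a vector of neural features, recorded while a person mouths a word, to the most likely word. The sketch below is a minimal nearest-centroid classifier over entirely synthetic, invented feature values; real systems use many electrodes and far more sophisticated models, but the shape of the problem is the same.

```python
# Hedged sketch (synthetic data, invented feature values): decode an
# attempted-speech recording by finding the word whose average training
# signature is nearest to the new feature vector.

import math

# Training examples: (feature vector from motor-cortex activity, word).
TRAINING = [
    ([0.9, 0.1, 0.2], "yes"),
    ([0.8, 0.2, 0.1], "yes"),
    ([0.1, 0.9, 0.8], "no"),
    ([0.2, 0.8, 0.9], "no"),
]

def centroids(examples):
    """Average the feature vectors recorded for each word."""
    sums, counts = {}, {}
    for vec, word in examples:
        acc = sums.setdefault(word, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[word] = counts.get(word, 0) + 1
    return {w: [v / counts[w] for v in acc] for w, acc in sums.items()}

def decode(vec, cents):
    """Return the word whose centroid is closest to the new recording."""
    return min(cents, key=lambda w: math.dist(vec, cents[w]))

cents = centroids(TRAINING)
print(decode([0.85, 0.15, 0.15], cents))  # -> yes
```

The speed gains the researchers report come largely from decoding at this level of abstraction: classifying intended movements directly is much faster than waiting for a person to select letters one at a time.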
 
Perhaps this is just an ad, disguised as an article, for Bing powered by GPT-4.

A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were ‘superhuman’
The Wharton professor chose a game that he himself authored so that he could gauge the quality of work. The game, Wharton Interactive's Saturn Parable, is designed to teach leadership and team skills on a fictional mission to Saturn.
In 30 minutes the tools managed to do market research, create a positioning document, write an email campaign, create a website, create a logo and “hero shot” graphic, make a social media campaign for multiple platforms, and script and create a video.

The professor said it would have taken a team maybe a couple of days of work to accomplish what was done. The article hinted that white-collar workers would be in the crosshairs.

The articles coming up under a search for jobs that computers can't do are interesting. Besides jobs that require human dexterity and rapid perception, computer programming was listed as a safe job, yet 29 percent of programmers answering a poll about this said they were worried AI would take their jobs away.

One interesting takeaway was that the race to give away open-source machine-learning tools is driven by the hope that they will become the standard for AI projects, the same way Google open-sourced the Android operating system for smartphones, with the result that 85 percent of the world's smartphones run Android.
 
The question about the current automation toolsets has been about the quality of the work, not the quantity. I am especially curious about the market research that the AI did. Computer systems have been automating work for decades, and the current set of machine-learning tools is simply a continuation of that trend. The result, however, will be to augment people doing existing work, not to replace them.
 
I just posted this in the “Threat....” thread, but in case that gets lost and forgotten I’ll post it here as well:

 
Two more new AI threads from around the forum:


In future I’ll do one update per month, until we have a dedicated AI subforum where they can all conspire together to overthr....I mean, where they can easily be found.

This might be worth a couple of hours of your attention:
 
Hi,

For me - with my only experience of AI being with Deep Dream (although now I understand Bing is powered by one as well) - the interesting thing is less what it can do, though that's amazing, than the mistakes it makes. Often enough I've been left scratching my head when it renders an image, wondering how it could have come up with what it did, because a lot of the time it just doesn't make sense. I presume this will change in time.

I'm also not too worried about having chips inserted in my brain and being taken over. It's not because the human brain is too complicated for an AI to figure out, or because we are a bit fuzzy. It's because there's still a very basic difference: we aren't digital. We are sort of analog. Our primitive nature may well be the very thing that saves us!

As for all those who believe we're one day going to be able to live as avatars in a digital world - don't even consider the idea. AI may pass the Turing test, but that doesn't mean it's alive - just smarter than us. And the basic fact is simple: you upload yourself into a computer and you die. There may be a perfect digital copy of you running around in a computer world, but that's not you. (This is why I had to stop watching Picard after the first season - and it didn't appeal either. But Picard is dead! There's just a confused robot running around in his place.)

Cheers, Greg.
 
Observation: when I started this thread in January, there were about ten AI-related threads on these forums.

In February & March we doubled that, adding another ten.

Another of my pre-emptive predictions - at this rate we’ll have fifty such threads by the summer, and over a hundred by Christmas.
 
