A.I. (general thread for any AI-related topics)

Peterson (I don't know anything about Roemmele). He always seems to be pushing political opinions while claiming to be a scientist. To be honest, I'm suspicious of any commentator who owes most of their fame to the internet, which is probably a bit unfair.
Peterson does like to espouse his political opinions and although they are only a very tiny fraction of his output they are the most reported and the most polarising. A lot of people have no time for him at all.

I find a lot of his work and interviews fascinating and went to his lecture (exploring the challenges of life) here in Sydney last year, which was quite amazing.

I hope his talk with Roemmele was nonsense, because their discussion described the possibilities of AI in a way I found quite disturbing, mainly as they sound plausible. I'm thinking 1984 meets 2001!
 
This is an older survey conducted in the US. So I say, let's just conclude that all job losses are caused by AI and be done with it.

48% of Americans support universal basic income for workers displaced by A.I.​

The American public is split on whether to provide a “safety net” to workers displaced by advancements in artificial intelligence.
 
Sci-fi author 'writes' 97 AI-generated tales in nine months and other AI news

Here's the full brief about the author

Sci-fi author Tim Boucher has produced more than 90 stories in nine months, using ChatGPT and the Claude AI assistant.

Boucher, an ML-using artist and writer, claims to have made nearly $2,000 selling 574 copies of the 97 works.

Each book in his "AI Lore" series is between 2,000 and 5,000 words long - closer to an essay than a novel. They are interspersed with around 40 to 140 pictures, and take roughly six to eight hours to complete, he told Newsweek.

Boucher's superhuman output is down to the use of AI software. He uses Midjourney to create the images, and OpenAI's ChatGPT and Anthropic's Claude to generate text to brainstorm ideas and write stories.

"To those critics who think a 2,000- to 5,000-word written work is 'just' a short story and not a real book, I'd say that these 'not real books' have shown impressive returns for a small, extremely niche indie publisher with very little promotion and basically no overhead," he argued.

Boucher said the technology's current limitations make it more difficult to produce longer passages of text that follow a coherent storyline. Despite these challenges, he said AI has positively impacted his creativity.

AI has divided the sci-fi community. Editors of Clarkesworld Magazine, for example, consider short stories written by machines to be spam.

Selling an average of six copies per book to make a couple of hundred bucks a month may not be the money fountain authors were hoping AI could provide.

As ever with these articles, the comments provide some useful, intelligent insights.
Like this:

“Impressive returns”, or not.​

Running the numbers, if I’ve understood it correctly:

97 ‘books’, each of which takes 6-8 hours to write (call it 7) means 679 hours of work.

$2000 income from that means each hour’s earnings is about $2.95.

At that productivity rate I might question whether it’s actually worth it!
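
A rough sanity check of that commenter's maths in Python (the 7-hour figure is an assumed midpoint of the quoted 6-8 hour range, and the $2,000 is the rounded total from the article):

# Back-of-the-envelope check of the figures quoted above.
books = 97            # titles in the "AI Lore" series
hours_per_book = 7    # assumed midpoint of the reported 6-8 hours per book
copies_sold = 574
income_usd = 2000     # approximate reported earnings

total_hours = books * hours_per_book       # 679 hours
hourly_rate = income_usd / total_hours     # about $2.95 per hour
copies_per_book = copies_sold / books      # about 5.9 copies per book

print(total_hours, round(hourly_rate, 2), round(copies_per_book, 1))
# -> 679 2.95 5.9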
 
Here are the new AI-related threads that were started in May:

Seven in the first post of this thread, seven in April, and another seven in May.
At this rate, by the end of next year we’ll have 150-200 such threads.
 
And that is how AI will take over, one thread at a time.
 
And that’s why I suggested to Brian (three months ago) that perhaps there should be a dedicated AI subforum for them all.
I'm not sure how this site works, but perhaps everything could be consolidated into two competing threads:
1. AI is dooming us all.
2. AI is the savior we've all been waiting for.

That could be pretty interesting.
 
Then link them and see who survives
This site does seem to be on the hunt for AI evils.

I just saw an article about a depression helpline that will start to use AI. For certain applications, such as the illusion of companionship that calling a helpline provides, AI, properly programmed, is a great tool. Particularly since there are never enough volunteers to fill the needs. And, frankly, I don't mind crushing the $4/minute talk-line business.
 

ChatGPT Accused Mayor Of Bribery Conviction, Faces Potential Defamation Claim​

Do robots dream of libelously claiming you molest electric sheep?​

It was all fun and games when ChatGPT was proclaiming Clarence Thomas the hero of same-sex equality or botching legal research memos by inventing fake law, but now the public-facing AI tool’s penchant for hallucination has earned its creators a threatened lawsuit.

Australian regional mayor Brian Hood once worked for a subsidiary of the Reserve Bank of Australia and blew the whistle on a bribery scheme. But since tools like ChatGPT haven’t mastered contextual nuance, Hood’s attorneys claim the system spit out the claim that Hood went to prison for bribery as opposed to being the guy who notified authorities. Hood’s team gave OpenAI a month to cure the problem or face a suit.



 
While autonomous weapons systems already exist (in air defence, where intercepting missiles can select incoming missiles or 'loitering' munitions as targets), this is a highly restricted field. That field might be about to expand.

Project Tempest (a UK, Italy and Japan collaboration) is the starting point in developing a sixth generation fighter. It has been touted that A.I. will play a large part in weaponry and target selection in order to lift a lot of the stress load from the pilot. While this in itself makes sense, it is this aspect of artificial intelligence that worries me the most. Add to the mix that each Tempest fighter could also control a number of drones that will fight alongside the main craft. These will also rely heavily on A.I. - meaning that a whole squadron of planes could have only minimal human interaction - and I'm sure the tech wizards will build in an extra level of A.I. just in case the pilot becomes incapacitated. This project is meant to keep control of the skies for around six decades before gen 7 development is needed.

Here's a quote from the government web page:
MBDA unveiled its concept for a weapon effects management system, to aid the coordination of all available weapons in the battle space using Artificial Intelligence and Machine Learning enhanced software.

Sounds a bit too Skynet to me...

Rolls-Royce has already started engine testing. The UK government is expecting to invest around £10 billion over the next decade, with Tempest due to arrive around 2035.

Some further info:


 
The quote is rather vague, but from what I have read from the posted articles, the AI aspect is to protect the aircraft from incoming threats.

Original quote from the UK government web page:
"MBDA unveiled its concept for a weapon effects management system, to aid the coordination of all available weapons in the battle space using Artificial Intelligence and Machine Learning enhanced software."​

From the Royal Air Force's 'The Tech' web page, the term 'weapon effects management' is further defined:
"Effectors will be used to protect Tempest by helping to assess and evaluate incoming threats, and then in managing the deployment of the appropriate method to defeat it."​

Without seeing further details, I would assume that the AI coordination of "all available weapons" is constrained to defending aircraft from attack. This appears to be a fairly limited defensive capability and is a far cry from some self-directed offensive attack capability. The concern that I would see is in friend vs. foe identification (transponder codes, perhaps?). Would a second aircraft coming to the aid of one under attack be misidentified as part of the attack?
 
A quote from the Team Tempest page:
Tempest needs to support existing weapons, planned weapons, and the weapons of the future. For instance, the next generation Beyond Visual Range Air-to-Air Missile Meteor and the network enabled precision surface attack missiles of the SPEAR family of weapons, will be optimised for Tempest.


I read 'all available weapons' a bit differently. I can see an enhanced version of an attack missile with A.I. perhaps being used for autonomous target re-selection if it detects a better or more important target once launched.

As for transponders, there's a lot of evidence that Russian aircraft probing UK airspace in recent months (this has increased massively in the last year) have been flying with their transponders switched off. I'd guess that they wouldn't be active on aircraft of either side in a conflict to avoid tracking.

But, to me, the biggest reason for using A.I. in an offensive capability is the fear that if we do not then we might lag behind our potential enemies who do go on to develop offensive A.I. capabilities.

Electronic warfare already plays a large part in modern conflict and I wonder how feasible it might be to hack into a network of fighters and supporting drones. Instead of taking control or planting a virus, perhaps in the future it might be about a kind of cuckoo A.I. that shoves the old system out of the 'nest' and takes over all connected aircraft, turning them on their owners. Maybe I've just stepped into the realms of science fiction here. :)
 
Not to get tooo political but which should we worry more about having authority in society?

Artificial Intelligence.
Natural Human Avarice and Stupidity.
 
