AI is outperforming humans in both IQ and creativity in 2021

Do you know where it says "remembered"? I thought it was scanning what they were looking at while they were looking.
That would be impossible since we don't even understand how memory works at this point.
 
That would be impossible since we don't even understand how memory works at this point.
That's what I thought, but I might have misread. However, it might be possible to detect an image from memory that a person is currently concentrating on.

But it is much 'easier' to detect incoming sensory data.
 
The title of the article, "This A.I. Used Brain Scans to Recreate Images People Saw," is slightly misleading. The process actually creates images that are somewhat similar to what the people saw. Please read the summary of the process at the bottom of the linked article. (I am not sure why, but the summary I tried to post previously has strikeout characters throughout.) It is a two-pass process: what is derived directly from an AI trained with MRI scans provides only fuzzy details, while a second AI associates MRI scans with a textual description of the image. The fuzzy details and the textual description are then used together to generate the displayed pictures.

From the article, here is the original paper (I have not read through this yet; my summary is derived from the article): https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2.full.pdf
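To make the two-pass idea concrete, here is a minimal sketch of how such a pipeline could be wired together. This is not the paper's code: the voxel counts, feature sizes, and ridge-regression decoders below are invented stand-ins, and a real system would hand both decoded signals to an image generator.

```python
# Minimal sketch of the two-pass idea, NOT the paper's actual pipeline.
# Everything here is a hypothetical stand-in: synthetic "scans", ridge
# regression in place of the trained decoders, and made-up feature sizes.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic training data: 200 scans, 5000 voxels each.
n_scans, n_voxels = 200, 5000
fmri = rng.normal(size=(n_scans, n_voxels))
image_latents = rng.normal(size=(n_scans, 64))    # stand-in for low-level image features
text_embeddings = rng.normal(size=(n_scans, 32))  # stand-in for caption features

# Pass 1: decode fuzzy visual detail directly from the scan.
latent_decoder = Ridge(alpha=10.0).fit(fmri, image_latents)
# Pass 2: decode a textual description (as an embedding) from the same scan.
text_decoder = Ridge(alpha=10.0).fit(fmri, text_embeddings)

def reconstruct(new_scan):
    """Return both decoded signals; a real system would feed them to an
    image generator (e.g. a diffusion model) to produce the final picture."""
    fuzzy = latent_decoder.predict(new_scan[None, :])[0]
    caption = text_decoder.predict(new_scan[None, :])[0]
    return fuzzy, caption

fuzzy, caption = reconstruct(rng.normal(size=n_voxels))
print(fuzzy.shape, caption.shape)  # (64,) (32,)
```

The only point of the sketch is the structure: the same scan is decoded twice, once for coarse visual content and once for a text-like description, and the displayed picture comes from combining the two.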
 
Do not get too overwhelmed by assumptions of AI omniscience.
A recent article in the NYT deals with submissions to three SF magazines with perhaps the best current reputations: Clarkesworld, F&SF, and Asimov's.
Chatbots are deluging each of them with AI-written stories. As of the date of the article, Clarkesworld had received 700 human submissions and 500 chatbot submissions that month. Asimov's received somewhere around 300 AI submissions the previous month, and F&SF reported a similar flood.

However the editors were unanimous in their opinion of the submissions.
The writing is also “bad in spectacular ways,” Mr. Clarke (of Clarkesworld) said.
“The people doing this by and large don’t have any real concept of how to tell a story, and neither do any kind of A.I.,” said Ms. Williams (of Asimov's). “You don’t have to finish the first sentence to know it’s not going to be a readable story.”
“It does not sound like natural storytelling,” said Ms. Thomas (of F&SF). “There are very strange glitches and things that make it obvious that it’s robotic.” Ms. Thomas reported that she had been permanently banning anyone who submitted chatbot-generated work.
All three report huge problems with even spending the time to open and discard the flood of junk.

The article is from the 2/23/2023 NYT. HERE is a link if you can open it.
 
That's what I thought, but I might have misread. However, it might be possible to detect an image from memory that a person is currently concentrating on.

But it is much 'easier' to detect incoming sensory data.
From what I understand, the current theory of memory is that it's not a 'snapshot' of an event, but rather an assembly of data. That's why memory is prone to degradation over time: the assembly process can be shaped by other factors (such as outside influences). For example, I was talking to a friend recently about a mechanic who built the motors for our drag cars. I was certain that I took a specific road and that the shop was on my left. I went to Google Maps, re-traced my route, and had it backwards; I actually took a whole different route, with the shop on my right.

Plus, there are no 'images' in memory. The images are created by our brains, and the only thing you could measure would be brain activity. For this type of technology to work, we'd have to understand the 'hard problem' in psychology: how does the brain turn electrochemical impulses into conscious experience? Again, I don't know if that's even possible to understand, and if it is, we're a long way away from being able to do so.
 
From what I understand, the current theory of memory is that it's not a 'snapshot' of an event, but rather an assembly of data. That's why memory is prone to degradation over time: the assembly process can be shaped by other factors (such as outside influences). For example, I was talking to a friend recently about a mechanic who built the motors for our drag cars. I was certain that I took a specific road and that the shop was on my left. I went to Google Maps, re-traced my route, and had it backwards; I actually took a whole different route, with the shop on my right.

Plus, there are no 'images' in memory. The images are created by our brains, and the only thing you could measure would be brain activity. For this type of technology to work, we'd have to understand the 'hard problem' in psychology: how does the brain turn electrochemical impulses into conscious experience? Again, I don't know if that's even possible to understand, and if it is, we're a long way away from being able to do so.
But what I was getting at is that you can "form" an image in your mind from memory, and that image may be detectable. That doesn't mean the image is accurate, just that it has origins in stored experience rather than pure imagination.
 
Do you know where it says "remembered"? I thought it was scanning what they were looking at while they were looking.
That would be impossible since we don't even understand how memory works at this point.
But what I was getting at is that you can "form" an image in your mind from memory, and that image may be detectable. That doesn't mean the image is accurate, just that it has origins in stored experience rather than pure imagination.
My reply was to @Christine Wheelwright, who in turn (I thought, anyhow) was replying to the comments about being able to read a dead person's mind and see their murderer.

So, I agree with you that since we don't yet understand memory, we can't possibly read memories yet. But what we "see" is not actually what our eyes observe. It has already been interpreted, had missing information filled in at the blind spots, and been processed by parts of the brain. We can prove that by fooling the brain with illusions. So these images must be the brain's interpretations of what the eye has seen; they are not a photograph made "pixel by pixel", as she said.
 
So, I agree with you that since we don't yet understand memory, we can't possibly read memories yet. But what we "see" is not actually what our eyes observe. It has already been interpreted, had missing information filled in at the blind spots, and been processed by parts of the brain. We can prove that by fooling the brain with illusions. So these images must be the brain's interpretations of what the eye has seen; they are not a photograph made "pixel by pixel", as she said.
That is all true, but still night and day different from a remembered image. Sensory information, processed or not, is real time and exists outside of the structures that form memories. The article appears (to me) to be saying that they were reading interpreted sensory impulses as they were happening.
 
But what I was getting at is that you can "form" an image in your mind from memory, and that image may be detectable. That doesn't mean the image is accurate, just that it has origins in stored experience rather than pure imagination.
In theory you could (possibly) use computers to create an image of what the person was remembering. However, we'd have to know a lot more about the brain to know how to convert brain activity into actual images on a screen. Brain scans such as fMRI only measure blood flow, so they can show which areas of the brain are more active than others. I assume we'd need some other type of technology that would record directly from neurons, then convert that data into images. Also, it's hard to know how much of a memory is accurate versus 'imagination'. For example, if someone asks you about an event, how they ask the questions can influence how people remember it.

After taking a few hits off my dry herb vaporizer ... I had an idea last night that could allow people to see what others are seeing. If there were a way to look at the transduction phase inside the eye (the rods and cones on the retina), where electromagnetic radiation is converted into neural signals, and then 'work backwards' to learn the state (for lack of a better term) of the light that interacted with them, that data could be sent anywhere a computer could reproduce the exact same light (via a screen), and you'd see exactly what they were seeing. I suppose that would fall under psychophysics and computer science. If that were possible, it would take clandestine espionage to a whole new level, since you wouldn't need a camera recording the data. I suppose some form of nanotech could accomplish this. I'm reading too much damn Sci-Fi!!
 
To act as HR, or to use its personal income to pay employees? Where would it get income?

Where would AI get income?

Making cuckoo clocks, perhaps. - It would need people to carry material between the 3D printers and CNC machines.
Maybe inventing a new cryptocurrency and then taking the cash from those sales to take control of the stock market. Possibly coordinating with other AI and "dumb" systems to control key industries.
Developing web pages on WordPress for small businesses.
Writing ad jingles.

Robin Williams offers a potential view of this in Bicentennial Man.

Lots of movies and books (and real-life criminal investigations & business books) offer examples of creating shell corporations to control money while hiding the true identity of the owner. Seems that an AI could do that.

But all of that is beside the point of the base question.

Should AI be allowed to have human employees?
 
Where would it get income?
 
Where would AI get income?

Making cuckoo clocks, perhaps. - It would need people to carry material between the 3D printers and CNC machines.
Maybe inventing a new cryptocurrency and then taking the cash from those sales to take control of the stock market. Possibly coordinating with other AI and "dumb" systems to control key industries.
Developing web pages on WordPress for small businesses.
Writing ad jingles.

Robin Williams offers a potential view of this in Bicentennial Man.

Lots of movies and books (and real-life criminal investigations & business books) offer examples of creating shell corporations to control money while hiding the true identity of the owner. Seems that an AI could do that.

But all of that is beside the point of the base question.

Should AI be allowed to have human employees?
It isn't beside the point. The point is granting AI permission to participate in the economy. It can't hire employees without money. Who gave it access to money and banking? It's the same question, really.
 
For tech industries, the first-level review of resumes (CVs) has already become a software algorithm looking for keywords. Having an AI scan a job description and come up with match criteria doesn't seem like much of a jump in technology.
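For a sense of how little is involved, here is a bare-bones sketch of that first-level keyword screen. The job description, resumes, names, and stop-words below are all invented for illustration; real applicant-tracking systems are more elaborate, but the core idea is just set overlap.

```python
# Toy illustration of a first-level keyword screen for resumes.
# The job description, resumes, stop-words, and scoring are all invented.
import re

def keywords(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

job_description = "Senior Python developer with SQL, cloud, and CI/CD experience"
required = keywords(job_description) - {"with", "and", "senior"}

resumes = {
    "Alice": "Python and SQL developer, built cloud pipelines with CI and CD",
    "Bob": "Retail manager with customer service experience",
}

# Score each resume by the fraction of required keywords it contains.
for name, resume in resumes.items():
    score = len(required & keywords(resume)) / len(required)
    print(f"{name}: {score:.0%} keyword match")
```

Having an AI generate the keyword list from the job posting, rather than a recruiter typing it in, is the only new step.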
Where would AI get money?

U.S. Supreme Court asked to decide if AI can be a patent 'inventor' - Article Here

With patents come royalties. What will an AI do with its royalties? Buy more RAM? - Gotta hire someone to install it.
 
Where would AI get money?

U.S. Supreme Court asked to decide if AI can be a patent 'inventor' - Article Here

With patents come royalties. What will an AI do with its royalties? Buy more RAM? - Gotta hire someone to install it.
I believe patents have names on them, even when they are owned by a corporation. But the payout doesn't go to the person whose name is on the patent unless that is the way the employment contract reads.

In other words, an AI can receive credit even though the patent and the AI are 100% owned by a corporation. So no money - unless someone wants to give the AI money (which they can do anytime, patent or not) and then give the AI access to commerce so the AI can buy things or employ people.

Think about a famous YouTube dog. What do they own, and what do they spend money on?
 
There is also the DoNotPay AI "lawyer" in the US, which has been drafting letters and answers to questions for defendants facing minor charges, such as speeding fines. After threats of jail (for the defendant, not the AI), I think it has been pulled from defending its first case in court. There has also been discussion of whether it can call itself a lawyer without having qualifications, but to my mind that's not really an argument, because if it is good enough then it could easily be given an honorary degree, or it could even sit exams (presumably marked by another, independent AI) :cautious:

Clearly, human lawyers have been rattled by this. But it could certainly generate a generous income; the question is only whether humans would allow it to keep it. Also, if we don't allow AIs to keep the money they have earned (or at least the profit, after billing them for electricity and rent), isn't this a form of slavery? (There is no definition of slavery upon which everyone agrees, but it usually includes the word "human", simply because such definitions were written by humans for humans.) Star Trek TNG covered this with Data in "The Measure of a Man".
 
Also, if we don't allow AIs to keep the money they have earned (or at least the profit, after billing them for electricity and rent), isn't this a form of slavery? (There is no definition of slavery upon which everyone agrees, but it usually includes the word "human", simply because such definitions were written by humans for humans.) Star Trek TNG covered this with Data in "The Measure of a Man".
Then domesticated animals are slaves. Non-human entities performing work for no pay is not a new or unique situation in our society.
 
