A.I. (general thread for any AI-related topics)

 
Computer systems have been defining truth, deciding what is actually true and false, for some time. The basic model is that inputs are represented as individual values. Each of those values is then weighted and the results aggregated. Then cut-off criteria are used to segment the final value into areas of true and false, with an indeterminate range between the two.
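As a rough sketch of that model (a minimal illustration in Python, with made-up inputs, weights, and cut-off values, not any particular system's implementation):

def classify(inputs, weights, low=0.4, high=0.6):
    # Weight each input value and aggregate into a single score.
    score = sum(x * w for x, w in zip(inputs, weights))
    # Cut-off criteria segment the score into true / false / indeterminate.
    if score >= high:
        return "true"
    if score <= low:
        return "false"
    return "indeterminate"

# Made-up example: three inputs in [0, 1], weights summing to 1.
print(classify([0.9, 0.8, 0.7], [0.5, 0.3, 0.2]))  # 0.83 -> "true"
print(classify([0.5, 0.5, 0.5], [0.5, 0.3, 0.2]))  # 0.50 -> "indeterminate"
print(classify([0.1, 0.2, 0.3], [0.5, 0.3, 0.2]))  # 0.17 -> "false"

Everything between the two cut-offs falls into the indeterminate range; everything outside them is forced to true or false.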

A computer system will not go into an endless loop; it will provide an answer. The validity of the answer may be problematic, though.
Sometimes it will take 7,500,000 years to come to a conclusion. And sometimes that conclusion is 42.

Best to ask the right question.

Not exactly what I was thinking of, but I have been struggling to think of an example to illustrate what I meant. Let's imagine two groups of humans living before 1697, when the Dutch explorer Willem de Vlamingh visited Australia. One group of humans in Europe thinks that all swans are white. One group of humans in Australia thinks that all swans are black. Both adamantly believe what they think is true, and since they have no contact with each other and have never seen the other type of swan, each is, from its own experience, correct. Yet only one answer can be true, so one must be false.

How would a current "AI" determine what is the truth here?

If values are weighted and the results then aggregated, this would favour the largest group rather than the correct answer. So, would it decide that, as there are more people living in Europe, all swans are white, or would it instead decide that swans are a pale shade of grey? Neither of those answers is the truth. Both are wrong.

Could it come to the real answer, that swans can be white and can also be black? Or could it say that it doesn't know the answer?

And that is a relatively straightforward problem that only has two answers. When you instead ask it what the height of a double-decker bus is, and there are hundreds of different kinds, different ways to measure, errors made in measurements...
 
Perhaps I am phrasing my inquiries wrongly, but my experience is that the answers I get can sometimes be that all swans are white, or that all swans are black. I inform it that this is not true, that some swans are white and some are black, and it apologizes, then comes up with another answer that may or may not match the actual situation. Basically, I need to know something about the answer to start with, and by additional prompting I sometimes get information that confirms or denies what I am thinking. Human interaction is needed to stay on track. Another choice is to ask for a list of possible answers and sort out the information I get back. In both cases AI is merely an advanced filter that can cut through pages of results that are only partially related to what I am researching.

One subject Copilot has trouble with is astronomy. That's a subject where you have to have a good idea of what you are looking for, or you get "all swans are black" or "all swans are white" depending on the source it is using.

I asked it about the helium leaks on the Starliner and it used sources that were about 5 weeks old. After being asked about "the current situation", it gave the old information again, but in a section after the answer called "in the news", it listed links to 3 news articles, written by people and posted in the last 24 hours, which did a much better job of answering questions about the past and present situation.

It might be that AI is using prompts from us that make it appear to be able to think. That was the original source of doubt in the early results of animal intelligence tests: that the animals were giving the researchers what they wanted to see, prompted by the researchers' actions. The animals are now seen to be intelligent by researching them in their own natural environment. What is AI's natural environment?
 
How would a current "AI" determine what is the truth here? Could it come to the real answer, that swans can be white and can also be black? Or could it say that it doesn't know the answer?
The result is dependent upon the training data given to the AI. If it is trained on an equal number of white swans and black swans, the answer to the query, "What color are swans?" would likely be black or white. If the training data is heavily biased towards one color or the other, then the answer will be skewed to the more prevalent training data. It is highly unlikely that the system would come up with gray as an answer. However, this query is not one that I would consider true or false.

Another approach is to show the system pictures of black swans and white swans and ask, for each one, "Is this a swan?" That can be resolved into a truth.
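As a toy illustration of both points, here is a minimal sketch (in Python, with invented sighting counts, not a description of how any real model works):

from collections import Counter

# Hypothetical "training data": swan sightings reported by the two groups.
sightings = ["white"] * 900 + ["black"] * 100   # Europe vastly outnumbers Australia

# Aggregating and taking the most heavily weighted answer favours the largest group.
print(Counter(sightings).most_common(1)[0][0])  # -> "white"; "gray" can never appear

# Asking about each instance instead ("Is this a swan?", "What colour is this swan?")
# resolves every case into a truth rather than averaging them away.
print(set(sightings))   # -> {"white", "black"}: both answers are true of some swan

Note that "gray" never comes up because it never appears in the data; the system can only skew between the answers it has seen.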

This is one of the problems seen with current AI image recognition software. Matching faces to pictures is considerably more accurate for Caucasian males than for African American males. It also performs better when the question is "Does this one face match this one picture?" as in identity verification. It is more problematic, with a high number of false positives, when the question is, "Can you match this one face to someone in this large array of pictures?" This is something shown in a lot of TV police dramas, where a picture of a suspect is fed into a computer and it comes back with a match of some known bad guy. That process is much more failure prone in real life.
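The 1:1 versus 1:N difference is partly just arithmetic. A back-of-envelope sketch, assuming a purely hypothetical false-positive rate of 0.1% per single comparison:

# Purely hypothetical numbers, for the shape of the problem only.
fpr = 0.001            # assumed false-positive rate per single comparison (0.1%)

# 1:1 verification ("Does this face match this one picture?"): one comparison.
print(fpr)             # 0.001 chance of a false match

# 1:N identification ("Match this face against this large array of pictures."):
gallery = 1_000_000
expected_false_matches = fpr * gallery          # ~1,000 expected false matches
p_at_least_one = 1 - (1 - fpr) ** gallery       # chance of at least one false match
print(expected_false_matches, p_at_least_one)   # 1000.0, effectively 1.0

Even a per-comparison error rate that is excellent for identity verification all but guarantees false matches when searched against a large enough gallery, which is why the TV-drama version is so much more failure-prone in real life.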
 
Sorry, but the recent IT 'outage' had nothing to do with Microsoft, and the Windows 3.1 stuff is an apocryphal rumour spread from one report describing their systems as looking like they were designed on Windows 95. The problem was not with the operating system; it was in an update from a company called CROWDSTRIKE (not cloudburst), an antivirus and security software company.

In the CrowdStrike outage, or "cloudburst", 8.5 million Windows devices were impacted. Not counting losses Microsoft might experience, the loss so far is estimated at 5.4 billion dollars. Malaysia's digital minister called on CrowdStrike and Microsoft to consider compensating affected companies. It's doubtful anything will come of that.

The Crowdstrike outage is a direct result of the Rot Economy, where profit is absolutely valued over people. / Where's Your Ed At?

As it grew, I wrote the Rot Economy, and the Shareholder Supremacy, and many other pieces that speak to a larger problem in the tech industry — a complete misalignment in the incentives of most of the major tech companies, which have become less about building new technologies and selling them to people and more about capturing monopolies and gearing organizations to extract things through them.

Every problem you see is a result of a tech industry — from the people funding the earliest startups to the trillion-dollar juggernauts that dominate our lives — that is no longer focused on the creation of technology with a purpose, and organizations driven toward a purpose. Everything is about expressing growth, about showing how you will dominate an industry rather than serve it, about providing metrics that speak to the paradoxical notion that you'll grow forever without any consideration of how you'll live forever. Legacies are now subordinate to monopolies, current customers are subordinate to new customers, and "products" are considered a means to introduce a customer to a form of parasite designed to punish the user for even considering moving to a competitor.

from here:
 
We are entering into politics and current affairs now, which I remind you we don't do here. In any case, the CrowdStrike problem seems to have had little to do with AI. (Even if it were to do with a more hands-off approach to software testing, and a reliance on AI might encourage the same, that point has now been made.)
 
I have a habit of watching a couple of Asian channels on TV - one Korean and the other Japanese. I watch them because often there are documentaries on their art, culture, country or cuisine.

Yesterday I was watching the Korean channel (I know Koreans are early adopters of AI) and I got my first taste of an AI news bulletin. It was set up like a normal studio, with an animated presenter and voice delivering the news. The animation was of a woman in a business suit standing next to a news desk. Her movements were stiff and robotic and the quality of rendering was poor. I'm not sure why it was called AI news because, for all I know, the voice could have been pre-recorded by a human and simply played back over a pre-rendered animation (which would not, in my opinion, be AI as we know it).

If, however, all the gestures, movements and voice delivery were generated in near-real time by AI, it was pretty awful. Allowance has to be made for the fact that early adoption would probably mean poor initial results but, if this is a taste of the future of newsreading, I'd pay to see a real presenter over a generated one.

One out of five stars.
 
It could be fake, a company trying to cash in on the AI-generated parade, or a real AI production getting bugs out by live testing. Either way, there could be zero studio costs. If it is a real AI company, the news program would be their advertising vehicle, and they could either be buying the air time themselves or supplying a low-budget news service for whoever owns the air time the show is on. They would improve the show by seeing what people liked and didn't like, working only on what mattered to people. That way they could see where they could take shortcuts to save money. If it actually works, they could generate revenue by putting in ads, even though the news is traditionally presented without ads. Did Max Headroom have commercials in it?

There is another strike in the entertainment industry; this time it is the gaming industry workers, from the actors to gaming employees with all kinds of different jobs. They want guarantees that they won't be replaced by AI productions. Even if AI-generated news programs aren't made in the US, they could still be seen here by originating in another country.
 
Not quite AI, but your copier is designed to stop you copying banknotes by recognising a pattern added to all notes.
Particularly notice that it is the layout it detects, not the elements that make up the pattern, which can be dots, flowers or anything else.
Maybe I can duplicate the layout and add it to my artwork to stop it being copied :unsure:
 
your copier is designed to stop you copying banknotes by recognising a pattern added to all notes.
There is something about the colour of banknotes too (Bank of England banknotes anyhow) that the photocopier cannot replicate (or at least that was so up until 10 years ago, when there were a lot of fakes in circulation) and the colour was always slightly off. In addition, it can replicate the silver thread, but only printed on the surface (it doesn't run inside the paper), and the watermark comes out much stronger, as it too is printed on the surface (rather than only being visible when you hold the note up to the light). Those three things are how you spot a fake one. There is also stuff you can only see with an ultraviolet light, and if there are a lot of fakes, they will often all have the same serial number.
 
When I worked in catering (1970s), the cashier brought a tenner to the manager that had been photocopied. It was quite easy to tell it was a fake, because the idiot had only copied one side - the other was blank.
Apparently, when she challenged this master criminal, he just dropped everything and ran out of the door.
 
There is something about the colour of banknotes too (Bank of England banknotes anyhow) that the photocopier cannot replicate ... Those three things are how you spot a fake one.
This is fine if you have a good idea of what the notes should look like, and this is why shops in England get so touchy about accepting notes from Scotland and Northern Ireland*. They simply wouldn't know how to distinguish a fake one from a real one. Not a big deal for big department stores and supermarkets, but a single fake £20 note taken in a small corner shop represents a significant amount of the day's profit, i.e. their personal income.

*For the non-Brits: you need to appreciate that in Scotland and Northern Ireland there are several banks that are licensed to print their own bank notes with their own designs, and they do look very different to the English ones.
 
It's just Speech to Text...
Okay, it looks a lot neater and probably works far better, but I saw this sort of technology [admittedly tied to a chunky laptop and not a phone] in the mid/late 90s.
We had a student who used to wear "sunglasses" in lectures.
They saw what the lecturer said and what questions others asked, and got a transcript of the lecture.
 
I do wish Google would stop making me click "No Thanks" to trying Gemini every other time I want to look something up.
 
Back to school?
There is something called the Roche limit, which shows how close a moon can get to a planet before it is torn apart by the planet's tidal forces, which deform the moon. The denser the moon, the closer it can orbit the planet before breaking apart. A less dense material deforms a lot more easily than a very dense material. The Roche limit does not depend on the size of the objects, but on the ratio of their densities.

Two equal-size bodies of the same mass can get much closer before reaching the Roche limit. I used Copilot to do the math for two Earth-size bodies of equal mass orbiting 20,000 miles apart. I figured Copilot could do a better job tackling the math, which it did, but the explanation it gave in English didn't match the math. The math for this equation gets a lot simpler with equal-size, equal-mass bodies.


the Roche limit for two equal-sized bodies, such as Earth-sized planets, can be approximated by:
d ≈ 2.44 × R
where R is the radius of each planet. For Earth, the radius is about 6,371 kilometers. So, the Roche limit would be:
d ≈ 2.44 × 6,371 km ≈ 15,540 km
Since 20,000 miles is approximately 32,187 kilometers, this distance is well within the Roche limit. Therefore, the tidal forces would be extremely strong, and the planets would likely be torn apart.
If you have any more questions or need further clarification, feel free to ask!

I said copilot was wrong, and it replied:

The discrepancy occurred due to an error in my initial calculations and interpretation of the Roche limit. Here's a breakdown of what happened:

1. Initial Calculation: I correctly calculated the Roche limit for two Earth-sized bodies using the formula d ≈ 2.44 × R, where R is the radius of Earth (approximately 6,371 kilometers or 4,000 miles). This gave a Roche limit of approximately 15,540 kilometers (or 9,760 miles).
2. Conversion Error: When converting 20,000 miles to kilometers, I correctly noted that 20,000 miles is approximately 32,187 kilometers.
3. Misinterpretation: The error occurred when I incorrectly stated that 32,187 kilometers was within the Roche limit of 15,540 kilometers. In reality, 32,187 kilometers is well beyond the Roche limit, meaning the two Earth-sized bodies would not be torn apart at that distance.

I apologize for the confusion and any inconvenience caused by this mistake. Thank you for bringing it to my attention. If you have any more questions or need further clarification, feel free to ask!

According to Copilot, these Earth moons would be traveling at around 11,000 miles per hour.
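For what it's worth, the corrected numbers check out. A quick sketch using the fluid-body approximation d ≈ 2.44 × R quoted above and standard constants, and assuming the 11,000 mph figure refers to the speed of the two bodies relative to each other:

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24             # mass of the Earth, kg
R = 6_371_000            # radius of the Earth, m

# Fluid-body Roche limit for equal densities, as in the quoted formula.
roche = 2.44 * R                      # ~15,545 km
separation = 20_000 * 1609.34         # 20,000 miles in metres (~32,187 km)
print(separation > roche)             # True: outside the Roche limit, the bodies survive

# Each body circles the common centre of mass at radius d/2, so
# G*M^2/d^2 = M*v^2/(d/2), which gives v = sqrt(G*M/(2*d)).
v = math.sqrt(G * M / (2 * separation))   # ~2.5 km/s for each body
relative_mph = 2 * v / 0.44704            # speed of one body relative to the other
print(round(relative_mph))                # ~11,100 mph, close to Copilot's figure

So 20,000 miles is roughly twice the 2.44 × R limit, and the ~11,000 mph figure matches the relative orbital speed of the pair.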
 
A comment on what AI is ultimately composed of.
Hand powered tools amplify the power of our physical actions. To get more output, you typically use more people. If a hole was dug, it stayed in the place it was dug. People could physically copy the action and dig another hole in another location. But the holes could not appear out of thin air.

Electricity exponentially amplifies our actions. Originally it amplified our physical actions. It has now gone on to amplify our thoughts in real time. In the past, messages written on paper had to be handed around for people to see them. Electricity and physical tools made it possible to write a message and have it copied a million times, say in a book, where it could eventually be delivered to a million people.

The information in the book did not automatically flow into people's minds. People had to physically get the book, then they had to open the book and read it. Information was distributed slowly and in a somewhat controlled manner as anyone could pick up the book and read it. To get more copies of the book, it had to be physically copied. The information in the book stayed the same no matter how much time passed. To change the information in the book, the book had to be rewritten and republished, then distributed all over again. Authors, publishers, and stores provided the information to the public.

Television brought effortless sight and easy-to-change sources, but the information was still limited as to how many copies there were and when it could be accessed, and it was still time-consuming to change the content, as it had to be physically made again. The amount and variety of information was still restrained by physical constraints. Writers, actors, visual production studios, and television sets provided the information to the public.

In the age of the digital smorgasbord (from Stephen Palmer's Substack), the information is transmitted directly into the mind by light waves. The information is constantly updated, and as many copies as are needed are instantly available. The information has transformed from something that was handled and controlled into a form of uncontrolled radiation that goes straight into the mind. There is no limit to what kind of information is presented to the public. There is no limit to who can provide the information. There is no limit to how many times the information can be instantly replicated or changed. The information is available globally, unlike earlier sources of information that were strictly regional.

The information used by the Natural World is real: it is true, verifiable and reproducible; there is no debating whether something is true or not. The information is universal, not regional. We can view it by using math, chemistry, and physics.

In the virtual world we have created to describe our world as we see it, truth can be debated. The digital framework we have created to display this virtual world is made of two states: on or off, zero or one, true or false. The status of a condition is either true or false according to whether it is equal to that condition or not.

Individual data units are composed of sets of statements, each stating that an information placeholder is either true or false with respect to a stated condition. Some individual data units have more false components than true components, such as 001, where the 1 indicates a value equal to the true state and the zeros are values not equal to the true state. Is it just coincidence that in the digital world, data units with more false statements than true statements can be used to describe reality? Is it the ultimate slippery slope?
 
