A.I. (general thread for any AI-related topics)

it is a horrible research tool
I agree entirely with that article. I have seen a local history piece written by one of these tools and it was full of absolute rubbish. All they do is trawl the internet for material, and everyone knows the internet is full of unreferenced, unsubstantiated, uncorroborated drivel and out-of-date information, if not outright lies and damn lies. They have no method of discerning whether what they gather up is true, or even realistic. Even if a human didn't know whether something was correct, they could at least make a guess at whether it was possible at all. So I would stay well clear of using them for any academic research. Unfortunately, I think we will see more news reports and lazily researched magazine articles written with them, and since these will reference each other, untruths will get repeated and spread even more widely. This is concerning on many different levels.
 
Lockheed Martin has been given a contract to develop AI for the Defense Advanced Research Projects Agency (DARPA). It's a bit difficult to read through all the jargon and fancyspeak, but the project…

aims to provide advanced Modeling and Simulation (M&S) approaches and dominant AI agents for live, multi-ship, beyond visual range (BVR) missions. It is a critical step in prioritizing and investing in breakthrough technologies for national security and to meet the evolving needs of customers.

Maybe somebody can correct me if I'm reading it wrong, but I think it will be using AI tools in an almost predictive sense, looking at what is needed to achieve victory in a given scenario.

The article also mentions surrogate models. I had to look that one up: in the context of machine learning, I think these are used to produce simplified versions of complex simulations. I would imagine (but don't know) that this could speed up the data-gathering process, on the assumption that a more complex model takes longer to produce the same results. I could see how this surrogate-model idea would be useful in a real-time environment like the combat information centre of a ship, where seconds might mean the difference between intercepting or failing to intercept an incoming missile. A rough sketch of the idea follows.
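To make that concrete, here is a minimal sketch of the surrogate idea, entirely hypothetical and nothing to do with the actual DARPA work. It assumes NumPy and scikit-learn are available; the "expensive" simulation is a made-up stand-in function.

```python
# Minimal surrogate-model sketch (illustrative only).
# Idea: run the slow, high-fidelity simulation a handful of times,
# fit a cheap model to those samples, then query the cheap model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):
    # Stand-in for a slow, high-fidelity model (hypothetical).
    return np.sin(3 * x) + 0.5 * x

# Run the real simulation only a few times to get training data.
X_train = np.linspace(0.0, 2.0, 10).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# The surrogate learns the input-to-output mapping from those samples.
surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# Later queries are answered by the surrogate in microseconds rather
# than by re-running the full simulation.
estimate, std = surrogate.predict(np.array([[1.37]]), return_std=True)
print(f"surrogate estimate: {estimate[0]:.3f} (uncertainty ~{std[0]:.3f})")
```

In a real-time setting like the one above, that speed difference would be the whole point.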

That's my take. Here are the articles for anybody more in the know than I am.


 
AI cannot discern fact from fiction, deception from truth, moral from immoral or - most importantly - right from wrong. It has access to a wealth of knowledge without any real sense of what to do with it.

Yes, we should use it - with caution. But we should never trust it, or reach a situation where we begin to rely upon it.
 
Isn't a 19th-century water-powered loom really a form of AI?
AI is a computer program. The loom was essentially one of the first examples of a dot-matrix printer hooked up to a punch-card reader: the loom machinery was the algorithm, and the punch cards that determined the pattern it wove were the data. Lady Lovelace saw that it was, in effect, a digital output device, but was unable to get her ideas out to where other people could make use of them. Humanity was so close, but all that came of it was corporation T-shirts.
 
AI cannot discern fact from fiction, deception from truth, moral from immoral or - most importantly - right from wrong. It has access to a wealth of knowledge without any real sense of what to do with it.

Yes, we should use it - with caution. But we should never trust it, or reach a situation where we begin to rely upon it.
That all sounds like a fairly accurate description of a significant proportion of the human race! ;)
 
AI is a computer program. …
Very cleverly, there is no meaningful, generally accepted definition of "AI." What is "artificial intelligence"? What makes AI different from any other type of computer algorithm?

As a simplistic example, a medieval cart horse possessed far greater intelligence and judgement than any "self-driving" algorithm, even if the horse was fairly limited in the number of places it could be taught to automatically drive to.
 
Very cleverly, there is no meaningful, generally accepted definition of "AI."

1) It's only advertising hype, just words to drum up dollars, like the old cigarette commercials; nothing to do with reality.
2) It's a computer program, nothing more.
3) It's designed to make it look intelligent, but it is totally fake.
4) When used properly it works quite well, but used improperly, which is 90% of the time, it is highly inefficient and a supreme energy waster.
5) It gives multiple answers and you have to choose the right one, and you have to tell it to give you multiple answers in the first place.
6) When you take a test and give multiple answers in the hope that the teacher will pick the correct one, you get an F.
 
Very cleverly, there is no meaningful, generally accepted definition of "AI." What is "artificial intelligence"? What makes AI different from any other type of computer algorithm?

As a simplistic example, a medieval cart horse possessed far greater intelligence and judgement than any "self-driving" algorithm, even if the horse was fairly limited in the number of places it could be taught to automatically drive to.
I think the difference is that AI can search and process external data sources rather than being given a fixed data set.
The rocket engine above was made by searching the 'internet of everything' and designing from scratch, rather than working from a pre-existing blueprint.
The distinguishing AI feature is learning rather than simply executing.
 
What makes AI different from any other type of computer algorithm?
That's a fundamental mistake to make. AIs are not based around algorithms, nor are they 'programs' as such; they are based around models. Very different things. Algorithms are all about branching logic, and taking that approach to build something like AI would generate levels of complexity impossible to manage. So neural networks and modelling are where it's at; a crude toy contrast is sketched below.
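Purely for illustration (everything here is made up, not any real system): the first function below is classic branching logic you can read and audit; the second is a 'model' in miniature, where the behaviour lives in learned numbers rather than in readable branches. A real neural network just has millions or billions of such weights.

```python
# Toy contrast: explicit algorithm vs. learned model (illustrative only).
import math

def rule_based(reading):
    # Classic algorithm: branching logic, readable and auditable.
    return "hot" if reading > 30.0 else "ok"

# A tiny "model": the behaviour is encoded in numeric weights.
# These values are pretend outputs of a training process.
w, b = 0.9, -27.0

def model_based(reading):
    score = w * reading + b                 # weighted input
    p_hot = 1.0 / (1.0 + math.exp(-score))  # squash to a 0..1 probability
    return "hot" if p_hot > 0.5 else "ok"

print(rule_based(35.2), model_based(35.2))  # prints: hot hot
```

The two agree here, but only the first can tell you why in terms a human can follow; the second can only point at its weights.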

In general I think a bad mistake is the way in which AI is so often judged. As an example, you get one case of a fatal accident in a fully automated car and it's all horror and litigation and Skynet coming to get us, whereas one fatal accident in a car controlled by a human driver is simply lost in the thousands that are just accepted as part of life. I truly struggle to understand this.
 
one case of a fatal accident in a fully automated car and it's all horror
The way I see it, you are implying that there has been only one case, even though that's not what you mean and certainly not what is happening. That, incidentally, is just the sort of answer an AI could come up with. The reporting of self-driving mishaps is not centrally recorded, which makes you wonder. Someday it will work, when the cars are part of a program running the whole road; but as independent decision makers they are just another advertising claim that doesn't deliver at the end of the day, except to sell cars. People who take their hands off the steering wheel on auto-drive are playing Russian roulette.

AI is simply a program that runs in a computer; without the computer there is no AI. It handles massive amounts of data, but that's just parallel processing. It's still just a program that works on probability.
 
I think the difference is that AI can search and process external data sources rather than being given a fixed data set.
The rocket engine above was made by searching the 'internet of everything' and designing from scratch, rather than working from a pre-existing blueprint.
The distinguishing AI feature is learning rather than simply executing.
The "AI" that exists does not do any of that.
Existing "AI" is a vast database of pre-processed disjointed "phrases" that it recombines in response to a prompt. If you ask about "the Sermon on the Mount" current "AI" systems doesn't read respond to your prompt by reading all the versions of the bible and then providing a reasoned analysis. "AI" searches it's database for the phrase "Sermon on the Mount" and then looks at all the other extended phrases in its database that includes the term and creates a coherent amalgam of those stored phrases.

Here is an excellent discussion on what AI is and what it isn't:

Here is an excerpt:
ChatGPT is a chatbot (a program designed to mimic human conversation) that uses a large language model (a giant model of probabilities of what words will appear and in what order). That large language model was produced from a giant text base (some 570GB, reportedly), though I can’t find that OpenAI has been transparent about what was and was not in that training base (no part of that training data is post-2021, apparently). The program was then trained by human trainers who either gave the model a prompt and an appropriate output to that prompt (supervised fine-tuning) or had the model generate several responses to a prompt which humans then sorted from best to worst (the reward model). At each stage the model is refined (CGP Grey has a very accessible description of how this works) to produce results more in keeping with what the human trainers expect or desire. This last step is really important whenever anyone suggests that it would be trivial to train ChatGPT on a large new dataset; a lot of human intervention was in fact required to get these results.

It is crucial to note, however, what the data is that is being collected and refined in the training system here: it is purely information about how words appear in relation to each other. That is, how often words occur together, how closely, in what relative positions and so on. It is not, as we do, storing definitions or associations between those words and their real world referents, nor is it storing a perfect copy of the training material for future reference. ChatGPT does not sit atop a great library it can peer through at will; it has read every book in the library once and distilled the statistical relationships between the words in that library and then burned the library.

ChatGPT does not understand the logical correlations of these words or the actual things that the words (as symbols) signify (their ‘referents’). It does not know that water makes you wet, only that ‘water’ and ‘wet’ tend to appear together and humans sometimes say ‘water makes you wet’ (in that order) for reasons it does not and cannot understand.

In that sense, ChatGPT’s greatest limitation is that it doesn’t know anything about anything; it isn’t storing definitions of words or a sense of their meanings or connections to real world objects or facts to reference about them. ChatGPT is, in fact, incapable of knowing anything at all. The assumption so many people make is that when they ask ChatGPT a question, it ‘researches’ the answer the way we would, perhaps by checking Wikipedia for the relevant information. But ChatGPT doesn’t have ‘information’ in this sense; it has no discrete facts. To put it one way, ChatGPT does not and cannot know that “World War I started in 1914.” What it does know is that “World War I” “1914” and “start” (and its synonyms) tend to appear together in its training material, so when you ask, “when did WWI start?” it can give that answer. But it can also give absolutely nonsensical or blatantly wrong answers with exactly the same kind of confidence because the language model has no space for knowledge as we understand it; it merely has a model of the statistical relationships between how words appear in its training material.

In artificial intelligence studies, this habit of manufacturing false information gets called an “artificial hallucination,” but I'll be frank: I think this sort of terminology begs the question.
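As a toy illustration of what "a giant model of probabilities of what words will appear and in what order" means, here is a bigram model. This is my own sketch, not anything from the quoted article or from OpenAI; a real LLM is a neural network and vastly larger. But the point carries: the model stores only which words followed which in its training text, and can still "answer" the WWI question without holding any facts at all.

```python
# Toy bigram "language model" (illustrative only).
# It stores nothing but word-follows-word counts, yet can emit
# sentences that look like knowledge.
import random
from collections import Counter, defaultdict

corpus = ("world war one started in 1914 . "
          "world war one ended in 1918 .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it followed.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("world"))  # e.g. "world war one started in 1914 ."
# Nothing here "knows" when the war started; the model only knows
# which tokens tended to follow which in its tiny training text.
```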
 
One place where AI is unmatched by humans is the interpretation of visual data, such as images. A person looks at a picture to see what is in it. To get a more detailed look, they zoom in on a spot in the image. Enlarge the whole image enough and you would have to display it on the side of a building: you can use a step ladder to look it over, or you can stick to the little screen and go square by square until the whole image has been thoroughly examined, possibly days later.

So we use partial zoom, pick out the areas of highest interest, zoom in, and call it a day. AI blows up the image and simply looks at it pixel by pixel, at a very fast rate, with no image fatigue, looking for coincidences with previous scans in which positive identifications have been made. It can do this 24 hours a day, day in and day out. Something like the sketch below.
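For what it's worth, here is a bare-bones sketch of that exhaustive pixel-by-pixel search, purely illustrative: a random array stands in for a real scan, and a patch cut out of it stands in for a previously identified pattern. Real systems use trained detectors rather than raw comparison, but the tirelessness is the same.

```python
# Exhaustive sliding-window search (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((200, 200))         # stand-in for a large scan
patch = image[50:58, 120:128].copy()   # "previously identified" pattern

ph, pw = patch.shape
best_score, best_pos = float("inf"), None
for y in range(image.shape[0] - ph + 1):        # every row...
    for x in range(image.shape[1] - pw + 1):    # ...every column
        window = image[y:y + ph, x:x + pw]
        score = float(((window - patch) ** 2).sum())  # 0 = perfect match
        if score < best_score:
            best_score, best_pos = score, (y, x)

print("best match at", best_pos)  # recovers (50, 120)
```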

Where it is far ahead of people is in identifying subtle molecular changes that precede the emergence of a new state: for example, finding evidence of minute changes that happen before a tumor even starts to form.

Ironically, because large amounts of data are needed to make AI predictions accurate, this process needs collaboration, open team science, and knowledge sharing. In short, people talking to each other across company borders all the time, without having to worry about giving away economic secrets.

Anything that standardizes the data collection, makes it more efficient, or keeps bad data out of the system makes the AI analysis that much better. People are an integral part of this process every step of the way. The AI deep-learning process doesn't work unless the people feeding in the data cooperate with each other on multiple levels. Yet currently, most AI data is obtained by stealing it when no one is supposedly looking.

All of the people shaping, checking, and standardizing the data are missing from the bulk of the AI set-ups, because that costs money. It could employ millions of people. Doing this the right way is as important as the Manhattan Project was. But as long as the money is pouring in, these exalted techno dubies use money to pave, plug, and bypass anything that could block them from cashing in big tomorrow, however dubious the results might be. The old-fashioned name for this, which never goes out of style, is penny wise and pound foolish.
 
One place where AI is unmatched by humans is the interpretation of visual data, such as images. …

But whereas AI sees a painting for its component parts, the human eye sees a work of art: beauty, imagination and vision.
 
So we use partial zoom, pick out the areas of highest interest, zoom in, and call it a day. …
Zoom and enhance!
 
AI blows up the image and simply looks at it pixel by pixel, at a very fast rate, with no image fatigue, looking for coincidences with previous scans in which positive identifications have been made. …
Except bicycles and stop signs. It just cannot tell them apart.
...Or it can and we are all doomed.
 
But whereas AI sees a painting for its component parts, the human eye sees a work of art: beauty, imagination and vision.
Language is important, and I do not believe I am being pedantic when I say that the human eye also sees only the component parts; it is the brain that interprets what it sees as art, beauty, etc. It would be quite trivial to teach an LLM the difference between beautiful and ugly, at least in the perception of the trainer, so that it could then 'see' beauty too.

Also remember that the brain's interpretation is very heavily influenced by culture. Different cultures see beauty in different things. Not so long ago, mountains and forests were seen as ugly, scary places that hid dragons and other evil creations. It is only in the last one or two hundred years that they have been perceived as beautiful (at least in the West), mainly thanks to the Romantic movement starting towards the end of the 18th century. So the identification of beauty is a very subjective thing. Eventually AI might grow to have an appreciation of beauty, but it won't necessarily be the same as ours. Which makes it no less relevant.
 
Whereas one fatal accident in a car controlled by a human driver is simply lost in the thousands that are just accepted as part of life. I truly struggle to understand this.
The major differences between a human-controlled vehicle (or other human-controlled system) and an AI-controlled one are the ability to replicate the issue, the ability to understand why something went wrong, and the ability to put some remediation into place.

In an AI system, there is no observable algorithm, and the relationships between the various inputs and the resulting outputs are unknown. In some cases, these relationships may be dynamic and change over time. It is often not possible to recreate the initial conditions and generate the desired or anomalous result.

In standard programs, despite the overall complexity, it is straightforward to map from a certain beginning state to the result, and this will be readily repeatable. It is a little more challenging, but humanly possible, to map from a result back to the possible input conditions that caused it. This is also done with human-centered systems, where it is quite common to have a review board evaluate an accident and determine root causes.

Once the reasons for an accident or anomalous result are understood, there are concrete steps that can be put into place to prevent, or at least reduce the likelihood of, its recurrence. With AI-based systems, the only solution is to try to retrain the system to recognize a specific input condition and react in a more desirable manner. Even then, the ability of the system to generalize and produce the desired result in similar but not identical situations is unknown. Furthermore, the number of specific trainings needed quickly grows.

Not knowing what an AI system might do under all circumstances, and not being able to correct faulty results, are reasons to treat AI systems differently than human-based ones.
 
Once the reasons for an accident or anomalous result are understood, there are concrete steps that can be put into place to prevent, or at least reduce the likelihood of, its recurrence. With AI-based systems, the only solution is to try to retrain the system to recognize a specific input condition and react in a more desirable manner.
And yet the most common accident involving self-driving cars is for a human-driven car to rear-end a stationary self-driving car.
 
And yet the most common accident involving self-driving cars is for a human-driven car to rear-end a stationary self-driving car.
I doubt we will all suddenly rush out and buy self-driving cars, just as we aren't going to buy AI-written books. However, just as we use grammar checkers and other AI to revise and edit, new cars are being built with sensors that automatically prevent such rear-ending. My automatic car with cruise control and lane control already leaves me with little to do. I expect that other incremental changes will eventually result in a fully self-driving car without anyone noticing that it has happened.
 
