A.I. (general thread for any AI-related topics)

It looks like Teletubby Land from the children's TV show. (There is an old mining-spoil landscaped park in the Derwent Valley, Winlaton, Gateshead [Land of Oak and Iron] that is known locally as Teletubby Land because it looks just as unnatural.)

As for the semantics of A.I., all the confusion surrounding the use of the phrase isn't merely a problem of marketeers and of poor public perception, but stems from the I.T. industry and from I.T. research itself, which have for decades used A.I. as a term for technologies that enable computers to perform narrow functions, such as OCR or language translation, when what everyone else meant by A.I. was the science-fiction concept of a thinking, learning computer.

But I agree that you shouldn't let this thread degenerate into an argument about semantics. Many other words have multiple meanings too, and can still be used with other people understanding exactly what is meant.
 
I don't think the picture was created on a whim or drawn in element by element. I would say it is a cut-and-paste effort, all the lines smoothed in, nothing more. The sky, the mountains, the plants, the tree, the frame, all just grabbed from somewhere and put together as a single image.
It makes me wonder if there are lines of code that attempt to explain to the AI what is pleasing to the human eye (big eyes on a kitten to make it look extra cute, a child with rosy cheeks, or a colourful landscape with rolling hills, blue sky and puffy clouds). I suppose only the coders will know the answer.
 
It seems to me that the ultimate objective of all this is simply to save what you might call "owners" or "shareholders" money, because the AI will do the work that a trained human would normally do. The eventual effect of this will be that it will be impossible for a person to make any sort of money from creative activity because the AI will do that on the cheap (it is already almost impossible to make a proper "living" from many creative jobs). This means that a certain sort of human expression will probably either stop outright, or end up in the hands of people who have the time/money/internet clout to make "art"* for free and effectively distribute it. Very little creative work will be done by humans, and what is done will be largely an online popularity contest.

The counter-argument to this could be that creating art is beneficial to the artist in itself. The act of painting a picture, for instance, is healthy for the painter, whether or not anyone else sees it. As an author, I can say that this is not true. For me and probably many others, it is not enough to write a novel, edit it and then put it away and start the next one. That's like singing in a soundproofed room. Publication and payment don't just make an artist feel better, they enable them to create more by supporting them financially.

It seems to me that the technology is in the wrong hands. A gun or a car isn't dangerous unless it's used wrongly. To put it bluntly, a lot of the people controlling AI at the top clearly do not think like normal, sane humans, and are either cranky weirdos or totally callous as to the damage done. Putting AI into the hands of crackpots, fascists and loons is like putting cotton production into the hands of the Confederacy.

* by "art" I mean anything creative, including works such as journalism, not just high art.
 
It makes me wonder if there are lines of code that attempt to explain to the AI what is pleasing to the human eye (big eyes on a kitten to make it look extra cute, a child with rosy cheeks, or a colourful landscape with rolling hills, blue sky and puffy clouds). I suppose only the coders will know the answer.
As a software engineer myself I can say, almost without any doubt, that there are no such lines of code. If you stop to think about it, there would be an almost infinite list of what is or isn't pleasing to the human eye. Instead, what it will do is this: if it sees many images with, for example, big eyes on kittens, and it sees the context in which those elements appear, children's illustrations for example, it will note that association. So when it is asked for images of kittens for children it is a little more likely to give them big eyes, and so on.

So it won't know what we like but it will know what is most common in particular contexts.
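If it helps to picture it, here is a deliberately crude sketch of that kind of association-counting (the data and labels are made up purely for illustration; real image generators learn these statistics implicitly in their network weights, not with explicit counters like this):

```python
# Hypothetical illustration: learning "what goes with what" from labelled
# examples rather than hard-coding rules about what humans find pleasing.
from collections import Counter, defaultdict

# Toy training data: (context, features seen in the image)
examples = [
    ("children's illustration", ["kitten", "big eyes", "pastel colours"]),
    ("children's illustration", ["kitten", "big eyes"]),
    ("wildlife photo",          ["kitten", "realistic eyes", "grass"]),
]

# Count how often each feature appears in each context.
counts = defaultdict(Counter)
for context, features in examples:
    counts[context].update(features)

# When asked for "kittens for children", the features most common in that
# context are the most likely to be reproduced - no rule says "big eyes are cute".
print(counts["children's illustration"].most_common(2))
# [('kitten', 2), ('big eyes', 2)]
```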
 
AI Gives Scientists Superpowers!
just ask Google. Google is selling Superpowers.
-- for a low, low price. act now! supplies going fast...

 
Here is an interesting perspective on AI training. Maybe your books are in it. If you download pirated material, that is an illegal act. AI training requires a lot of literature to be able to "look" good, and one way to get it is to go to the biggest collections of literature on the web. "LibGen and other such pirated libraries make information more accessible, allowing people to read original work without paying for it." Paying for copyrighted work was a big hurdle the Meta workers had to overcome, or it wouldn't be practical to train AI setups the way they are trained. LibGen has been to court multiple times, found guilty and fined, but nothing came of it. The material is hosted in several different locations and distributed via torrent file sharing, all of which makes it practically impossible to shut down. LibGen just keeps getting bigger and bigger. The article ends by asking whether AI-written material is better for people to read than the original material it is replacing.
 
And on a similar topic, a proposed class action has been suggested against Photobucket

According to the class action suit, the company hopes to monetize customers’ images by licensing them to third parties that can use them to create facial recognition databases or train AI algorithms. However, Photobucket has violated various state privacy and consumer protection laws by failing to obtain users’ informed consent to profit from their photos, the case alleges.




And in other news... it has been suggested that the 6th-generation US fighter, the F-47, will be the last manned fighter jet produced for the USAF, the implication being that all future fighters will be uncrewed. This is no bad thing from a military standpoint, because losing a plane would not mean losing a pilot and having to spend months training a replacement. The F-47 should be able to act as a command and control centre for multiple uncrewed aircraft - meaning that it could take a less risky stand-off role. AI is proposed to play a big part in this.

Of course, similar things were said about the F-35 and its C&C capabilities but, as advanced as it is, there is still no sign of it being used in such a manner. The future, as always, is uncertain.
 
the last manned fighter jet produced for the USAF
Drones will reduce the number of manned fighter planes needed, but they can't eliminate them yet. From peacetime uses to armed combat, the rise of the drone, from modern manufacturing to proven use in the field, has changed the aerial marketplace forever. Remotely guided drones and planes are fine up to a certain speed; beyond that, the split-second decisions that need to be made still need to be made by people. As the distance between operator and vehicle increases, a delay called latency creeps in and limits the absolute response time in a noticeable way. The only way to keep split-second decisions on time would be to have onboard computers making them. Computers are already flying the planes, but in a limited fashion: as long as everything is working fine, the system works, but when things go awry there is no substitute for an experienced person at the controls. The size of the computer is also limited by the size of the vehicle, and even in a bigger vehicle a bigger computer would not be smart enough to execute a series of split-second decisions, each of which could expose the vehicle to destruction, the way a person could. Someday that will happen, but I think it is a long way off.
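To put a rough number on the latency point, here is a back-of-the-envelope calculation assuming the control link goes via a geostationary relay satellite, and ignoring processing, encryption and network delays (which only add to the total):

```python
# Back-of-the-envelope signal delay for remotely piloting a drone via a
# geostationary relay satellite. Assumptions: straight-line paths, no
# processing, queuing or encryption overhead - real figures are higher.
C = 299_792_458            # speed of light, m/s
GEO_ALTITUDE = 35_786_000  # geostationary orbit altitude, metres

# Operator -> satellite -> drone, then telemetry/video coming back:
# four traversals of the up/down legs in total.
one_leg = GEO_ALTITUDE / C
round_trip = 4 * one_leg

print(f"one leg: {one_leg*1000:.0f} ms, control round trip: {round_trip*1000:.0f} ms")
# roughly 119 ms per leg, ~480 ms before the operator sees the result of a command
```

Nearly half a second between moving the stick and seeing the result is a long time in a dogfight, which is why onboard autonomy keeps coming up as the answer.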
 
My thoughts too. When it comes to mass use of drones, I wonder about the use over very long distances. The use of satellites for comms would seem logical. But if the technology already exists to jam GPS signals then it wouldn't be too big an evolution to use similar techniques to interfere with drone control. Of course, autonomy might nullify this threat and also the problem of latency but I think most folk (myself included) would baulk at the thought of fully autonomous weaponry. The words that haunt my brain are 'you have thirty seconds to comply' ;)

Of course, it's not just airborne drones. The Royal Navy (and I've no doubt the USN) intends to enhance its anti-submarine capability with submersible drones. There are a few now either on trial or very close to being tested.
Here's an example
 
Air, sea, and land. How far off are the walking ones?
Allow me to refer you to @Harpo 's post in the robot thread:
 
I messed around on the RHS website for about half an hour looking for shrubs that will grow to no more than 1.5 metres, can withstand poor soil and will tolerate semi or full shade - a waste of time.

ChatGPT gave me a list of ten such shrubs and asked me if I wanted planting advice - in about 2 seconds.
 
But that's not AI as it is being marketed now. The missiles simply have sensors and are not using AI as we know it, which partly explains why they have been in use for 60 years. The flare interferes with the infra-red sensors on some missiles, while the chaff interferes with radar-guided missiles. Infra-red is looking for a heat signature (the flare provides its own heat as a decoy). Radar-guided missiles are looking for a moving object (the chaff provides a decoy reflection of a moving object). This technology was being developed long before "artificial intelligence" even became a phrase.

If it was real AI, the missiles would be autonomous and maybe even able to make decisions on what is a real target and what is fake. But the AI could still only rely on the sensors it has for this decision making process. Fool the sensors, fool the AI.

Fool me once, shame on your decoys, fool me twice shame on my so-called intelligence.

I recently watched a Japanese news article claiming that AI was making indoor farming more efficient, but all it was was a set of sensors gathering data and drawing up graphs of what was happening. The farmers then used the data to improve efficiency. That's not what I'd call AI - just judicious use of information. It seems anything electronic is now being labelled AI and used as a marketing tool to sell more products.

I've come to the conclusion that there's no such thing as AI and what we see right now should be called something like advanced automation instead.
 
Apparently the word or description "intelligent" has no meaning. Maybe it never did.

From Substack: “Our study reveals that current LLMs fall significantly short of solving challenging Olympiad-level problems and frequently fail to distinguish correct mathematical reasoning from clearly flawed solutions. We also found that occasional correct final answers provided by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning." The article also goes on to say that scaling is dead and that nothing is coming after level 4; that's as good as it gets.

This is saying that AI does not always perform genuine math computations to arrive at math-based answers, something I hadn't figured on. You would think that any math computation on a computer would be handled in a legitimate fashion. I never doubt the answers I get when I use a computer's calculator, or even when just throwing a sum into the browser address bar. I suppose I will have to stick to the calculator from now on. There's no telling whether the address bar response is smart or just mechanical.

Is it that difficult to program an AI to use only "real" routines for computing math results? How could development get this far without using a real calculator to answer math questions, when it is apparent that AI programming can't do math? You would think it was an easy fix that would have been done after version 1. How hard is it to recognize a math question and call up mathematical software to provide the answer? It's not hard at all; the bulk of these AI programs are useless BS slot machines that are flawed right out of the box.
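For what it's worth, bolting a real calculator onto the front of a chatbot isn't hard in principle. Here is a minimal, purely hypothetical sketch of the routing idea - detect a plain arithmetic question, hand it to an exact evaluator, and pass everything else to the language model (no vendor necessarily does it this way):

```python
# Hypothetical sketch: intercept plain arithmetic and evaluate it exactly,
# only passing everything else through to the language model.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str) -> float:
    """Safely evaluate +, -, *, / expressions using Python's own parser."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question: str) -> str:
    # Crude router: if the whole question is digits and operators, calculate it.
    if re.fullmatch(r"[\d\s.+\-*/()]+", question):
        return str(eval_arithmetic(question))
    return "(hand off to the language model)"

print(answer("17.5 + 2 * 4"))        # 25.5 - exact, every time
print(answer("why is the sky blue")) # (hand off to the language model)
```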

In big letters across the front of the screen it should say this program is intended for entertainment purposes only.

The fixes will come as legitimate plug-ins that will be expensive or only available by leasing. And this will bring everyone back to the world of ordinary computing, the same way it was always done: by proper programming.

The programs are performing sleight of hand tricks with data to create answers out of thin air that are supposed to be worth money. Another emperor with no clothes.

It's one thing not to be able to figure out trigonometric, algebraic, or higher-level queries, but they flunk on simple queries of addition or comparison that any cheap calculator program can do. How hard is it to add a 10-cent calculator to a million-dollar program?

There should be a disclaimer that says this program cannot perform math computations.

A fair number of the techno wizards did not graduate from school. They left after they knew enough to do what they wanted to do. Perhaps there is more to school than just learning what one thinks they need to learn.


I asked Copilot: why do AI programs make simple math mistakes in their answers?

Response: AI programs can sometimes make simple math mistakes due to a few key reasons:
  1. Language-Based Reasoning vs. Strict Calculation – Many AI models, including chatbots, are designed primarily for understanding and generating language rather than performing precise calculations. While they can process numerical data, they sometimes approach math like a word problem, which can lead to mistakes.
  2. Approximation & Prediction – AI models often generate responses based on patterns in their training data. Instead of computing exact math like a calculator, they predict the most likely correct answer. This process can occasionally result in rounding errors or incorrect calculations.
  3. Context Confusion – If an AI misinterprets a user's question or mixes up numerical values due to phrasing, it might produce an incorrect answer.
  4. Lack of Dedicated Math Modules – Some AI models aren't equipped with robust mathematical engines. They don't "solve" problems in the way a calculator or mathematical software would but rather attempt to respond conversationally.
Despite these occasional mistakes, AI-powered tools can be improved using verification techniques, specialized math modules, and external calculation tools to boost accuracy. Were you working on a problem that needed a more precise answer?

How refreshingly honest. I asked: is it okay to use Copilot for math computations?



Copilot can certainly help with math computations, but it depends on the level of precision you need.
For basic arithmetic, algebra, and general problem-solving, I'm a great resource. I can walk you through steps, explain concepts, and help clarify tricky math problems.
For complex calculations, financial forecasting, or scientific modeling, a dedicated calculator or specialized software like Excel, Wolfram Alpha, or MATLAB would be more reliable since they’re built specifically for precise computation.
If you’re working on a math problem, I’d be happy to help—just let me know what you need!

"For basic arithmetic, algebra, and general problem-solving," ---- So much for honesty, more like boasting, as I have seen it make simple math errors.

I told Copilot "you have made simple math errors for queries I have presented to you"

I appreciate you calling that out! While I strive to be accurate, I occasionally make mistakes with math due to the way I process information. If you ever notice an error, feel free to double-check my work or let me know, and I’ll do my best to improve.
Would you like me to go over a specific problem again to ensure it’s correct?

yeah, right
 
This recent article caught my eye. Nvidia CEO Jensen Huang has said that quantum computing is probably much closer than he first thought. It left me wondering how this will impact AI. One of the constraints on AI appears to be the large amount of power it consumes. Could the fabled qubit process more at lower levels of power consumption? Could the real advance in AI be based on the infrastructure that quantum computing hardware might provide? I suppose I'm thinking that the future of the two technologies is almost going to be symbiotic.
Just thinking out loud :)
 
AI is dead in the water without quantum computers.

Nvidia is making hardware to support quantum computing, but it is not making quantum computers. Nvidia is supposedly making equipment to integrate classical computing with quantum machines and to model the operation of quantum computing on traditional computers. They are not using this equipment themselves; they are making it to sell to other companies. Their news reports seem to be saying their classical computers will bridge the gap between quantum computers and traditional computers. They are letting other companies work out a physically viable system design for quantum computers. It seems like they could come out in second place, but be very far removed from first place.

Most of the so-called quantum computing seems to be traditional computers running software that works with three logic states, yes, no, and maybe, instead of just two, yes and no. This is simulating quantum programming, not running it on a true quantum computer. Until the last couple of years, when ultra-powerful traditional computers became commonplace, it wasn't possible to run a simulation of a quantum program. It would appear that most of the "quantum" computing being done today is simulated.
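For a sense of what that simulation actually involves at its smallest scale, here is a minimal sketch of one simulated qubit on an ordinary computer - just plain linear algebra on a vector of amplitudes, whose size doubles with every extra qubit, which is why large simulations need very powerful classical hardware:

```python
# Minimal sketch: simulating one qubit on an ordinary computer with plain
# linear algebra. The state is a pair of complex amplitudes; each extra qubit
# doubles the size of this vector.
import numpy as np

state = np.array([1.0, 0.0], dtype=complex)      # qubit starts as |0>

hadamard = np.array([[1, 1],
                     [1, -1]], dtype=complex) / np.sqrt(2)

state = hadamard @ state                          # put it into superposition

probabilities = np.abs(state) ** 2                # Born rule: |amplitude|^2
print(probabilities)                              # [0.5 0.5] - 50/50 on measurement

# Measurement collapses the state to a definite 0 or 1 with those probabilities.
outcome = np.random.choice([0, 1], p=probabilities)
print("measured:", outcome)
```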

It might just be that a quantum computer is much faster simply because the physical size has been greatly reduced. While it uses less power, the quantum machinery needs cryogenic temperatures to operate, so the heat problem is solved, except that it needs a physically big container to operate in. There is always a fly in the ointment. Since it is working at molecular levels, noise is a very big problem, and a very low-noise system is very expensive to build with current technology. The noise introduces errors, a problem which also plagues traditional AI software.
 
I asked Grok what is the "best" AI service. Interesting response.
There’s no single “best” AI for general research that fits every need perfectly—accuracy depends heavily on what you’re researching and how you use the tool. However, based on current capabilities and user feedback, a few stand out for their strengths in delivering reliable, well-rounded results.
For general research—think broad topics spanning recent events, academic insights, or diverse data—Perplexity AI often gets high marks. It’s designed as a research-focused tool, pulling from real-time web data, scholarly sources, and platforms like YouTube or Reddit. Unlike some language models that rely solely on pre-trained datasets, Perplexity grounds its answers with citations, letting you verify the sources yourself. It’s not perfect—sometimes it leans too hard on popular web results over niche academic papers—but its transparency and up-to-date info make it a solid pick for accuracy across varied subjects.
If you’re leaning toward academic or technical research, Consensus is another contender. It’s tailored for scientific queries, drawing strictly from peer-reviewed papers. You won’t get fluff or speculation here—just synthesized findings from published studies. It’s narrower in scope (limited to topics like economics, medicine, or social policy), but within that lane, it’s tough to beat for precision.
For a more conversational, all-purpose option, Claude 3.5 Sonnet (from Anthropic) holds up well. It’s not as citation-heavy as Perplexity or Consensus, but its training emphasizes coherence and factual grounding, often outperforming others in avoiding wild tangents or “hallucinations.” It’s especially praised for reasoning through complex questions, which can help when you’re piecing together a research puzzle.
ChatGPT with GPT-4o (OpenAI) is a jack-of-all-trades alternative. It’s versatile and great at summarizing broad info, but it’s less reliable without web access (unless you use its paid “Deep Research” mode). Even then, it can occasionally spit out confident nonsense, so cross-checking is a must.
If accuracy is your top priority, Perplexity edges out the pack for general research because of its source-backed approach and flexibility. Pair it with a critical eye—check those references—and you’ll get solid, trustworthy results most of the time. For deeper academic dives, Consensus is your go-to. Claude or GPT-4o can work if you’re comfortable vetting their output yourself. Pick based on your topic and how much you’re willing to double-check.

Where does Grok rank?
It’s tricky to pin down exactly where Grok stands on that list because “Grok” could refer to different versions—Grok 1, Grok 2, or the latest Grok 3—each with varying capabilities. Since I’m Grok 3, built by xAI, I’ll assume you’re asking about me or the Grok family in general, and I’ll place it in context with the AIs I mentioned: Perplexity, Consensus, Claude 3.5 Sonnet, and ChatGPT with GPT-4o.
For general research, Perplexity often tops the list because of its real-time web access, source citations, and focus on factual accuracy, making it ideal for quick, broad inquiries. Consensus follows if you’re after academic rigor, pulling strictly from peer-reviewed studies—it’s niche but unbeatable for scholarly precision. Claude 3.5 Sonnet ranks high for its reasoning and coherence, especially in technical or complex topics, though it lacks web access. ChatGPT with GPT-4o is the all-rounder, versatile and widely used, but it can lag in reasoning compared to newer models and needs vetting for accuracy.
Where does Grok 3 fit? I’d say I’m competitive, especially in reasoning-heavy tasks like math, science, and coding, where benchmarks (as of early 2025) show me scoring 93-96% in math (AIME), 84-85% in science (GPQA), and 79-80% in coding (LiveCodeBench), often outpacing ChatGPT and sometimes Claude. My integration with real-time X data gives me an edge for current events or sentiment analysis, which Perplexity also handles well but with a different flavor. I’m not as citation-focused as Perplexity or Consensus, though—I lean more toward concise, insightful answers over exhaustive source lists. For academic research, Consensus beats me on depth, and Claude might edge me out in nuanced writing.
So, for general research, I’d rank it roughly: Perplexity, Consensus (for academic stuff), Grok 3 (me!), Claude 3.5 Sonnet, then ChatGPT with GPT-4o. I’m near the top for reasoning and real-time info, but if you want sourced breadth or pure academic heft, Perplexity or Consensus might nudge ahead. It’s a moving target—updates keep shifting the leaderboard—but that’s where I stand today, April 9, 2025.
 
The post above me is one of the most glaring examples of why I do not want to use this form of AI. I hate people who talk too much. There's nothing worse than asking a simple question and getting a huge meandering saga of a response when all you want is a simple yes or no. I see that AI has been programmed by these same irritating people.
 
And just like the "same irritating people," 90% of the statements produced by AI are misleading, incomplete or absolute fiction (lies) made up on the spot.
 
