"I don't think AI is necessarily so very far away from that."
Sorry for taking those brief remarks out of context, but they kind of illustrate my view of AI in 2024, which (arguably rather cynically) amounts to this:
AI - in the truest sense of the word - doesn't actually exist
Contentious, perhaps, but from my experience, it seems that:
"AI" is nothing more than a term marketing people use as they know you can add x% onto whatever you're charging for the service/system
There are a couple of (hopefully!) relevant points that may make this a bit clearer:
1. Artificial Intelligence - the Turing-esque definition - is a comprehensive system capable of complex, independent thought
It's a broad definition, but loosely (in my warped mind, at least) it means a system that is:
- Capable of advanced thought and reasoning across multiple dimensions/domains
- As good as (if not better than) your average human at those things
Sure, you can code/develop something to fool a Turing test. It's been done before. Take the Deep Blue/Kasparov scenario, for example:
- There is a finite (albeit rather large) number of possible moves at any point in a chess match
- Run simulations, calculate more moves ahead than a Grandmaster can, and brute-force the problem, and it's possible to work the probabilities so that - hours later, when the human tires - the machine wins by virtue of consistent performance (it's unaffected by tiredness or cognitive strain, and doesn't need to relax or sleep)
- However... if you ask Deep Blue what it thinks about, say, US politics, or who'll win the Premier League next season, will that response be more or less realistic than if you asked Garry Kasparov?
It's a very narrow use-case, is what I'm saying - far from a holistic system capable of advanced thought and decision-making.
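To illustrate what I mean by brute force, here's a very rough sketch of the idea in Python - plain minimax over the game tree with a crude material count. (This is nothing like Deep Blue's actual implementation; the python-chess library, search depth and piece values are just my choices for the example.)

```python
# A toy "brute force" chess player: enumerate every legal move a few plies
# deep, score the resulting positions by material, pick the best.
# Real engines add alpha-beta pruning, better evaluation, opening books and
# endgame tables, but the principle is the same.
import chess  # third-party python-chess package

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int) -> int:
    """Exhaustively search `depth` plies ahead and return the best score."""
    if depth == 0 or board.is_game_over():
        return material(board)
    scores = []
    for move in list(board.legal_moves):
        board.push(move)
        scores.append(minimax(board, depth - 1))
        board.pop()
    return max(scores) if board.turn == chess.WHITE else min(scores)

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the legal move whose subtree scores best for the side to move."""
    best, best_score = None, None
    for move in list(board.legal_moves):
        board.push(move)
        score = minimax(board, depth - 1)
        board.pop()
        better = (best_score is None or
                  (score > best_score if board.turn == chess.WHITE
                   else score < best_score))
        if better:
            best, best_score = move, score
    return best

print(best_move(chess.Board(), depth=3))
```

There's no "thinking" anywhere in there - just exhaustive enumeration and a scoring rule. Scale the depth and the hardware up far enough and you get something that beats a Grandmaster, but it still can't tell you anything about US politics.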
and...
2. Working with ChatGPT/CoPilot in a Dev Environment...
In my last job, my boss wanted to see if tools like ChatGPT and CoPilot were any good, and whether they could make things easier for us at work. So we did some fooling around in a safe dev environment where we could test Python bulk-data-processing scripts (volumes of data that exceeded the maximum limits of CSV files by a considerable amount and required some serious batch processing).
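For context, the scripts we were testing were roughly this shape - chunked reads with pandas so the whole dataset never has to fit in memory. (The file name, column names and chunk size below are placeholders for the example, not our actual pipeline.)

```python
# A minimal sketch of batch-processing a CSV that's too big to load in one go:
# read it in chunks, aggregate each chunk, then combine the partial results.
import pandas as pd

CHUNK_SIZE = 500_000  # rows per batch; tune to available memory

def process_in_batches(path: str) -> pd.DataFrame:
    """Stream a huge CSV in chunks and aggregate as we go."""
    partials = []
    for chunk in pd.read_csv(path, chunksize=CHUNK_SIZE):
        # whatever per-batch work the script actually does, e.g. group-and-sum
        partials.append(chunk.groupby("account_id")["amount"].sum())
    # merge the per-chunk results into one final aggregate
    return pd.concat(partials).groupby(level=0).sum().to_frame("amount")

print(process_in_batches("transactions.csv").head())
```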
- We started by feeding it functions and asking for docstrings as output
- The first problem was that ChatGPT wasn't up to date with the latest version of Python, so it often failed to interpret the code properly
- It failed so often that it was virtually useless, because you had to check every output (which took so much time that it was quicker to code by hand)
- We next tried getting the AI to write compact functions by giving it clear specifications in plain English
- It failed. Pretty much every time. Sometimes it looked like it might work, but digging deeper, the code was fundamentally flawed
- I/We also queried the AI on how to code x or y - what's the Python code for doing this or that?
- It was garbage. Many times I've told the AI it was wrong 3+ times in a row over a very simple, single-purpose function. It says it understands, says it's made a mistake, then tells you it's learned and that this time the code is absolutely, definitely right
- It sounds believable - that's the danger - but is still useless
So my experience has been that it's far, far quicker to Google something or look it up on Stack Overflow than to even try using ChatGPT/CoPilot. But it looks and sounds incredibly convincing - even to veteran, cynical developers. So, if the IT nerds can't quite be sure that AI works, how can a normal person (i.e. someone not kept in a darkened basement) be expected to tell whether the answers to their questions are correct, or even close to correct?
True, my own tests weren't perfect, were biased, and were themselves fairly narrow use-cases, but...
In my view, they demonstrate that we're not working with true AI, just (relatively) sophisticated computer programs that substitute mathematical brute force for creative/original thought. We humans mostly (eventually) learn from our mistakes and try something different, but AI doesn't seem able to do even that unless you tell it to (and not always even then).
And one final point...
Remember Blade Runner? The Voigt-Kampff test is akin to the Turing Test. Replicants are AIs inside a body, right?
And despite all the tech, the only thing that can make that distinction between human/replicant is Deckard (Harrison Ford).
I remember the Ridley Scott film (1982, so early 80s) and it was ahead of its time back then. But that, in turn, was based on Philip K. Dick's novel Do Androids Dream of Electric Sheep?, which - I think - was cutting edge when it was published back in 1968.
So, the concept's more than half a century old, right? And I'm not sure we've got that far, let alone further.
That being said, I'm an idiot: yesterday I tried writing leet and got the 1337 back to front, so humans are still far from perfect.