@Justin Swanton
While I agree with you to some extent, much of what we perceive to be intellect is changing.
I have watched many lectures on this subject and read quite broadly on the different elements of generalised AI. For the moment I prefer to talk about machine learning, because a general AI does not exist; even so, the current power of machine learning can be quite difficult to grasp. As I stated above, if we hold two tenets to be true, that intelligence arises out of complex data processing and that machine learning and data processing will continue to improve, then IMO generalised AI is a natural result.
One of the misconceptions is that computers just do 2+2=4 and that all of their conclusions are reached by marching down pre-programmed algorithmic routes. That is not how modern machine learning works; the uncomfortable fact is that we don't fully understand why a trained system makes the choices it makes. Within the next 10 years (a pessimistic estimate to my mind, at the current rate of development) an AlphaZero-style system will be able to beat any human at any mental game you can imagine, having been given only the rules and time to "learn" the game. Recently AlphaZero beat the leading chess engines after mere hours of self-play.
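To make the "given only the rules" point concrete, here is a toy sketch of self-play learning shrunk down to noughts and crosses. It is nothing like DeepMind's actual code; every name and number in it is my own illustration. The program is told only the legal moves and the winning lines, and its "strategy" is just a table of position values nudged towards the results of games it plays against itself:

```python
# Toy illustration of self-play learning (my own sketch, not AlphaZero itself):
# the program knows only the rules of noughts and crosses and learns a value
# for each position by playing itself and nudging values towards game outcomes.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)   # value of a position, seen from X's point of view
ALPHA, EPSILON = 0.2, 0.1     # learning rate and exploration rate (arbitrary choices)

def self_play_game():
    board, player, visited = [' '] * 9, 'X', []
    while True:
        moves = [i for i, s in enumerate(board) if s == ' ']
        def value_after(m):
            b = board[:]; b[m] = player
            return values[tuple(b)]
        if random.random() < EPSILON:
            move = random.choice(moves)   # occasionally explore a random move
        else:                             # otherwise play greedily on learned values
            move = max(moves, key=value_after) if player == 'X' else min(moves, key=value_after)
        board[move] = player
        visited.append(tuple(board))
        if winner(board) or ' ' not in board:
            outcome = {'X': 1.0, 'O': -1.0, None: 0.0}[winner(board)]
            for state in visited:         # pull every visited position towards the result
                values[state] += ALPHA * (outcome - values[state])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):
    self_play_game()
print("positions the program has formed an opinion about:", len(values))
```

Nobody writes the strategy down; it emerges from the rules plus repetition. AlphaZero does the same thing in spirit, but with a deep neural network in place of the value table and a guided tree search in place of the one-move lookahead.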
The victory at Go shouldn't be underestimated: this is not a game where you can classically compute, with boolean logic, every possible outcome of a given position. One of the curiosities is that the scientists aren't sure HOW the learning is happening or WHY the trained system chooses the moves it does; indeed some of its moves looked to human experts like poor choices or mistakes. Its Go opponents even said they felt they were playing against an intelligent being rather than a machine.
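To see why brute force is off the table, back-of-the-envelope arithmetic is enough. The branching factors and game lengths below are the commonly quoted rough figures, not precise counts:

```python
# Rough game-tree sizes: about branching_factor ** game_length positions to enumerate.
# The figures used are the commonly quoted approximations, good enough for an
# order-of-magnitude comparison between chess and Go.
import math

chess_exponent = 80 * math.log10(35)     # ~35 legal moves per position, ~80 plies per game
go_exponent    = 150 * math.log10(250)   # ~250 legal moves per position, ~150 moves per game

print(f"chess game tree ~ 10^{chess_exponent:.0f}")   # roughly 10^124
print(f"go    game tree ~ 10^{go_exponent:.0f}")      # roughly 10^360
# Either number dwarfs the ~10^80 atoms in the observable universe, which is why
# AlphaGo pairs a learned evaluation with a guided search rather than exhaustive search.
```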
Incidentally, the internal data maps these learning algorithms build look awfully like a neural network, and in fact that is what they are: deep artificial neural networks, loosely inspired by biological ones.
Recently some chatbot AIs (Facebook's negotiation bots) started talking to each other in a broken form of English which, when analysed, turned out to be a more efficient, if brute-force, way of communicating. They were duly switched off.
Now, chatbots are not particularly smart, but what happens when we reach the point where a generalised artificial intelligence cannot be identified as such through conversation, when it can fake humanity so well that we can't tell the difference (the Turing test, in effect)? When does simulated intelligence become intelligence?
I firmly believe that generalised AI is both the biggest boon and the greatest threat on humanity's horizon, and one whose potential dangers the masses aren't attuned to, because it is in vested groups' interests to keep pushing the bounds of machine learning.
Generalised AI doesn't exist, but we do have machines that can learn, and the scary thing is we don't really know how they are learning.
The stock exchanges and much of our digital online presence are now managed by automated software; automated hardware is becoming more and more prevalent, and eventually I can see genuine autonomy in machines. For better or for worse.
Vernor Vinge wrote about "The Age of Failed Dreams", and I think that is the next age for humanity: we realise we've wrecked the earth, we realise we are NOT going to be bouncing around the stars, and we create a godlike AI intellect that confirms it for us before doing whatever an AI with that much intelligence is wont to do.