Major advance in machine learning

Google's DeepMind has made a major advance in machine learning. The previous version of their AlphaGo program could beat experts at Go, but it was trained by 'studying' loads of humans playing the game. This new version has now soundly thrashed the earlier version, winning 100 games out of 100, but this version was only taught the rules and then 'learnt' the tactics by playing against itself.

Google DeepMind: AI becomes more alien
The AlphaGo program, devised by the tech giant's AI division, has already beaten two of the world's best players.

It had started by learning from thousands of games played by humans.

But the new AlphaGo Zero began with a blank Go board and no data apart from the rules, and then played itself.

Within 72 hours it was good enough to beat the original program by 100 games to zero.
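Purely as an illustration of what "learning from just the rules by self-play" means (this is my own toy sketch - the real AlphaGo Zero combines deep neural networks with Monte Carlo tree search), here is a tabular learner that masters the simple game of Nim given nothing but the rules and games against itself:

```python
import random

# Toy illustration of learning by self-play: tabular value learning on Nim
# (take 1-3 stones from a pile; whoever takes the last stone wins).
# Not DeepMind's method - AlphaGo Zero pairs deep networks with Monte
# Carlo tree search - but the principle is the same: only the rules are
# given, and all tactics emerge from playing against itself.

PILE, MOVES = 12, (1, 2, 3)
Q = {}  # Q[(pile, move)] -> estimated value for the player about to move

def best_move(pile, eps=0.0):
    """Greedy move, with optional epsilon-greedy exploration."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda m: Q.get((pile, m), 0.0))

def train(episodes=20000, alpha=0.5):
    for _ in range(episodes):
        pile, history = PILE, []
        while pile > 0:                      # one game of self-play
            m = best_move(pile, eps=0.2)     # both 'players' share Q
            history.append((pile, m))
            pile -= m
        reward = 1.0                         # the last mover won
        for pile, m in reversed(history):    # credit moves backwards,
            q = Q.get((pile, m), 0.0)        # alternating perspective
            Q[(pile, m)] = q + alpha * (reward - q)
            reward = -reward

random.seed(0)
train()
print(best_move(3))  # it learns to take all 3 stones and win at once
```

After training, the agent reliably grabs immediate wins it was never told about - all it ever received was the move rules and a win/lose signal at the end of each self-played game.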
 
I'd love to see this same approach applied to chess.
Seems like it would take even less time to master chess this way.
I think the key thing here is that Go is a game with relatively simple rules but a lot of tactics, whereas chess has more complex rules as well as the tactics, so it's probably harder to do. However, I wouldn't mind betting they are working on it.

It's really quite an exciting advance though possibly also a little scary!
 
How long before they plug the program into Wikipedia and have the program crunch through all the world's knowledge to come to the conclusion, "Human mostly threat to itself - EXTERMINATE!"

I always like to think of Marvin from Hitchhiker's when people talk about computers coming to the conclusion of wanting to exterminate/control all human life. Mostly because people like the doom aspect, but what if the computer simply did not care whatsoever!
 
Mostly because people like the doom aspect, but what if the computer simply did not care whatsoever!

Which is far more likely, since caring would require an emotional response, not just calculation.
 
I always like to think of Marvin from Hitchhiker's when people talk about computers coming to the conclusion of wanting to exterminate/control all human life. Mostly because people like the doom aspect, but what if the computer simply did not care whatsoever!

Believing humanity would benefit from extinction wouldn't require an emotional outlook, IMO.
 
Go is a game with relatively simple rules but a lot of tactics

Um ... daft (rhetorical) question, but why would anyone try to develop AI whose first lesson was in tactics? Why not teach it to respond to more everyday problems? Either the researchers have an extremely limited view of what constitutes "intelligence", or else this is a military-funded project, and too many stories have already warned us where that leads!
 
It's more about problem solving. Most games are all about having a defined set of rules and then problem solving (winning) to get to the end result. That's essentially what they want from an AI - one which they can give parameters and data, and which can then adapt and problem-solve on its own.

By keeping the rules simple, they can study how the code builds up layers of problem solving, which can then be worked on to let the AI problem-solve for more and more rules and conditions.

The ultimate goal is a machine which can take vast bodies of data and process them quickly for solutions.
 
Um ... daft (rhetorical) question, but why would anyone try to develop AI whose first lesson was in tactics? Why not teach it to respond to more everyday problems? Either the researchers have an extremely limited view of what constitutes "intelligence", or else this is a military-funded project, and too many stories have already warned us where that leads!

I can't speak for AI researchers, but I think it's about being either deep or broad with the problem. With a game like chess or Go, there is little ambiguity - there's a board, there are some pieces, there's a finite set of rules and an end state to get to. Hence the 'AI' is programmed to go really deep and try to figure out something highly specific. I'd question such an approach's worthiness, but perhaps there are general principles in these very narrow fields of endeavour that might be taken across to other problems.

If you go for 'everyday problems' then I think the depth is hampered by how broad the issue becomes. Everyday life is messy and chaotic, albeit with patterns that we can see, but difficult for the programmers of current AIs to really get to grips with. For example - how does an AI interact with its environment? Which would be some sort of prerequisite for something to move about and interact in society. Can it handle everything that comes into its vision/hearing/senses in the same way that a human can (and all the situations that a human might find itself in)? At the moment, no.
 

I think the main goal is to get computers closer to being able to think like humans. We are illogical creatures, but we do possess extremely good methods of processing the information we receive, probably mostly from our neural plasticity. Nothing in our brain happens in a straight line, which results in multiple dimensions of thinking capacity. We remember scenes, smells, sights, sensations, and emotions, giving our memories extreme depth. And giving a computer the ability to process that is very difficult, because simulating that neural plasticity is quite challenging.

The main point in this case is that the computer program literally taught itself - a milestone in programming.
 
I can't better the previous answers! :D
The main point in this case is that the computer program literally taught itself - a milestone in programming.
And I absolutely agree that that is the real milestone in this. As far as I'm aware, all previous 'AI' attempts have been 'taught' by humans, whereas this one has learnt on its own by experimentation. And I think that concept of experimentation is critically important.
 
The other part is that they want AI that thinks rather than AI that just runs every possible simulation of moves. The latter works, and any thinking code has to have some element of prediction/simulation, but it's very resource-intensive to simulate every single possible move.

So they want an AI that thinks rather than just simulates, which as a result can be given much bigger bodies of data to work with and deliver a result within practical limits.

Of course, alongside this they are always increasing the processing capacity of computers. So not only are they trying to make more efficient thinkers, they are improving the capacity for those codes to run. One big advance on that front is quantum computing. At present machines work in binary - essentially a series of on/off situations - whereas a quantum bit can exist in a superposition of states rather than being simply on or off, which for certain kinds of problem promises a massive gain over the classical machine code that underlies programming languages.
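To put a rough number on how expensive "simulating every single possible move" gets, here is a toy comparison (my own sketch, unrelated to AlphaGo's actual search): exhaustive minimax on tic-tac-toe against the same search with alpha-beta pruning, counting the positions each one visits.

```python
# Exhaustive minimax vs. alpha-beta pruning on tic-tac-toe, counting
# visited positions. Even on this tiny game the full simulation touches
# hundreds of thousands of nodes; pruning cuts that dramatically, and
# Go's game tree is so much larger that neither brute-force approach
# is viable - hence the push towards learned evaluation instead.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def search(b, player, alpha, beta, prune, stats):
    stats[0] += 1                       # count every position visited
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1    # X maximises, O minimises
    if all(b):
        return 0                        # board full: draw
    best = -2 if player == 'X' else 2
    for i in range(9):
        if b[i]:
            continue
        b[i] = player
        v = search(b, 'O' if player == 'X' else 'X', alpha, beta, prune, stats)
        b[i] = None
        if player == 'X':
            best, alpha = max(best, v), max(alpha, v)
        else:
            best, beta = min(best, v), min(beta, v)
        if prune and alpha >= beta:     # opponent will never allow this line
            break
    return best

full, pruned = [0], [0]
v1 = search([None] * 9, 'X', -2, 2, False, full)
v2 = search([None] * 9, 'X', -2, 2, True, pruned)
print(v1, v2)              # both agree tic-tac-toe is a draw: 0 0
print(full[0], pruned[0])  # pruned search visits far fewer positions
```

Both searches reach the same conclusion (perfect play is a draw), but the pruned one gets there by visiting only a small fraction of the positions - the kind of saving that matters when the game tree is astronomically bigger than this one.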
 
And yet they still can't make an AI for the Civilization PC game that offers a challenge against a human player without cheating.

Barring a few supercomputer AIs, pretty much every single computer game AI cheats.
On the first score, home computers lack the power to let the AI actually see the game the same way the player does. So it's not even seeing the same set of variables or controls in the same way the player is. On that line, the AI is cheating because it's playing by a totally different set of inputs and controls.

On the second, most game AIs are not there to be human simulations; they are there to be a challenge for the player to play against. Their biggest downfall is that most are not highly adaptive (after a point), and as such repeated play against them can show up the patterns in how they work - better AIs hide this better; worse ones are far more obvious.

A third factor is mistaken cheating. A lot of players scream cheating with regard to AI, and yet many times what's claimed to be cheating turns out not to be the case. Instead, the AI often gains an advantage in ways the players don't realise.


That said, in many RTS games the AI does have bonuses or just flat out doesn't have to pay for things. It's an annoyance because it means that resource strikes don't tend to work well against the AI like they would against another player; and it also means that the AI doesn't go for resource contesting as much - most RTS AIs tend to just make a direct line from their base to the enemy base (ergo, most times the player's base). They are getting better - Blizzard has even opened up a huge roster of tools for access to its AI to further development.

A final factor is that I think many computer game AI developers get very little actual time to develop it. I think it gets caught out, and as such sometimes modders (with more free time) can often improve upon a game's core AI somewhat.
 
I think the key point here though is the simple fact (that you do mention) that a PC or games console simply doesn't have anything like the computing power being used by programs like this AlphaGo.
 
I think the key point here though is the simple fact (that you do mention) that a PC or games console simply doesn't have anything like the computing power being used by programs like this AlphaGo.

The rate of increase in hardware and software capability may surprise you (the increasingly powerful AlphaGo programs went from cloud-distributed systems with large numbers of separate CPUs and GPUs to a single machine with only 4 TPUs). However, right now such a system would, yes, be exorbitant! :D

But, after all, it is now possible to run a chess program that can beat any human player on a smartphone - remember when they needed supercomputers to do it?

Who knows what will be sitting as my PC in twenty years? (When I was in research twenty years ago we had brand new shiny DEC Alphas delivered and we were over the moon. Now I have a 4-core processor clocked at 3.6GHz - each core individually ten times faster than the original DEC Alphas. Mainly playing Fallout 4 at the moment, but I'm also crunching huge amounts of numbers for loads of other projects :p)
 
