
AlphaGo - GO Computer Defeats European Go Champion Fan Hui

This is the most historic moment in the history of A.I. development, and it will be even more so if AlphaGo defeats the top human GO player in the match below...

From March 9th to 15th, 2016, AlphaGo will face its ultimate challenge: a 5-game match in Seoul against the legendary Lee Sedol, the top GO player in the world over the past decade.

But I have to say (I don't know why it has taken A.I. so long to do it?) that I had similar ideas on how to do it more than 30 years ago!
Being in the wrong place at the wrong time in my life stopped me from doing many things, including beating a world-class human GO player using a computer program.
I have always believed Chess is far more complicated than GO, and that GO is just a larger version of Noughts & Crosses. Experts quote the reason as being that a typical Chess position involves about 20 possible moves, whereas a GO position involves about 200. But what they are forgetting is that the 20 possible Chess moves are individually more complicated! Just look at the different ways the pieces can move, whereas in GO the moves are along simple lines within simple areas of space, where a player has to surround the opponent's pieces, etc.
OK, I understand GO is played on a much larger board (19 x 19) compared to the Chess board (8 x 8), but the size of the board can be handled with the same type of calculations, just as normal 3 x 3 Noughts & Crosses could be played on a 9 x 9 or 19 x 19 board. The main difference when trying it with Noughts & Crosses on a large board (9 x 9 or bigger) is that it may be impossible to make a winning line :)
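To put rough numbers on that branching-factor argument, here is a small sketch using the ballpark figures quoted above (about 20 moves per Chess position, about 200 per GO position; these are rough figures and the function name is my own, purely for illustration):

```python
# Rough game-tree growth for the branching factors quoted above.
# The figures (20 for Chess, 200 for GO) are ballpark numbers,
# not exact averages.

def tree_size(branching_factor, depth):
    """Number of leaf positions after `depth` plies of full expansion."""
    return branching_factor ** depth

for depth in (2, 4, 6):
    chess = tree_size(20, depth)
    go = tree_size(200, depth)
    print(f"depth {depth}: chess ~{chess:.1e} positions, go ~{go:.1e}")
```

At every extra ply the GO tree grows 10x faster than the Chess tree, which is the usual argument for GO being harder to brute-force, whatever one thinks about the complexity of the individual moves.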
I have written Noughts & Crosses (normal 3 x 3) computer programs that play the game perfectly! So scaling up to play a larger version is not so difficult, just a matter of hardware speed, and the same can be applied to GO. The trick for GO, which AlphaGo has done and which I would have done :( , is to cut the tree search off so that it does not try to search so deep, just deep enough to be stronger than the best human players.
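As a sketch of what "playing 3 x 3 perfectly" involves, here is a minimal full-depth minimax for Noughts & Crosses (my own illustrative code, not the A.R.B programs). On a 3 x 3 board the whole tree is small enough to search to the end; on bigger boards this is exactly where the depth cutoff becomes necessary:

```python
# Full-depth minimax for 3x3 Noughts & Crosses: exhaustively search
# every continuation and back up the best score for each side.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, draw
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None
    return max(scores) if player == 'X' else min(scores)

# With perfect play from both sides, 3x3 is a draw:
print(minimax([None] * 9, 'X'))  # → 0
```

The known result, a forced draw with perfect play, is what the search confirms; the problem with scaling up the board is that the unpruned tree grows explosively, which is why a cutoff (plus some way of evaluating non-final positions) is needed.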

So there we have it! The historic moment has happened, and BIG congratulations to the AlphaGo team for doing it first!!
This has inspired me to get going with a long overdue project of mine I have been putting off: (The A.R.B GO System) :) It follows on from the success of (The A.R.B Chess System), with which I have beaten the very best Chess computer programs, so look out AlphaGo, it's only just starting :) You may beat the best normal humans, but ???
I have had the idea for (The A.R.B GO System) for some time; check out my first YouTube video for more information, and see my prediction for the March 9th-15th, 2016 AlphaGo v Lee Sedol match.

I'd just like to mention... I may challenge the AlphaGo team first with my other system (The A.R.B Rock-Paper-Scissors System) :) Let's see if they are up for it?


Re: AlphaGo - GO Computer Defeats European Go Champion Fan Hui

Synopsis of Myungwan Kim's analysis of Fan Hui vs AlphaGo Match X-post from r/MachineLearning (self.baduk)
submitted 9 days ago by NFB42

Hi there. I've been having a discussion over on /r/hearthstone with /u/yetipirate about computer Go. For his sake I made a synopsis of Myungwan Kim's analysis video. It's less relevant here, since anyone here can just watch the video themselves, but /u/mkdz suggested I post it here as well, so I figured why not. I'll try to cut out some of the explanations aimed at non-Go players, but keep in mind I wrote this to make the analysis more digestible for them.
In General:
The match has been big news in East-Asia as well. The thing which most shocked all the professionals was that AlphaGo played so much like a human player. Their first impressions were that it's as if this was a human playing, not a computer.
Since how a human plays is, obviously, pretty well known, they decided to focus their commentary mostly on the cases where AlphaGo doesn't play like a human.
The first thing Myungwan Kim noted was that AlphaGo has a Japanese playstyle. The commentators don't know why, but they suspect the original human data set was biased towards Japanese playstyles.
The Games:
In the first game both sides played very passively in the opening; leisurely and gentle, they say.
Myungwan Kim finds that AlphaGo has a weakness here: it doesn't seem to understand the value of sente. Because of this, he says, Fan Hui was winning the first game in the opening; it was the only game Fan Hui was winning after the opening. He estimates Fan Hui was about 10 points ahead and can't see white getting back even 5 points coming out of that opening. Myungwan Kim offers some alternative moves for AlphaGo which would still have left Fan Hui in the lead, but would have given AlphaGo better opportunities to come back.
Myungwan Kim later points to one huge mistake by Fan Hui in the midgame that lost him the game.
Final conclusion from game one: aside from not understanding sente and gote, Myungwan Kim says AlphaGo betrays itself as a computer in that it sometimes goes too far in mimicking standard professional play, making the most common move instead of the most optimal one. In other words, it's extremely book smart, but at times fails to notice when it should ignore the books because the specific situation in the game makes a less standard move the optimal one instead. (A bit cliché imo, but Myungwan Kim says "AlphaGo is not creative".) They think that might really hurt AlphaGo in the match against Lee Sedol.
In game 2, they note, Fan Hui really played too aggressively, as he admitted in his own post-match interview. Myungwan Kim says he can see Fan Hui wasn't playing his best game, but was trying to test AlphaGo to see if it could be tricked into making exploitable mistakes.
Myungwan Kim says Fan Hui actually put up a really good fight. After the opening it should have been over for Fan Hui, but AlphaGo almost allowed him to get back into the game.
Game 3 is similar to the fifth game, though Fan Hui played better in the beginning here. Myungwan Kim notes several moves by AlphaGo which are top professional moves. He also notes some moves by Fan Hui which he thinks hint that Fan Hui might be a bit out of practice at professional-level games (he says it's the kind of move you make if you're too used to playing teaching games against amateurs). Fan Hui lost because he played over-aggressively and left too many holes in his defence as a result.
In the fifth game, Myungwan Kim says AlphaGo was winning from the beginning. They marvel at AlphaGo's decision to enter a Ko fight, but they're not sure whether AlphaGo really knew what it was doing or just got 'lucky' that the Ko worked in its favour.
Myungwan Kim points out that AlphaGo made a huge mistake early in this game, but was saved because not long after, Fan Hui made an equally huge mistake. This is an example where he thinks a true grandmaster like Lee Sedol would not have let AlphaGo get away with the kind of mistake it made there.
AlphaGo's Strengths and Weaknesses:
Myungwan Kim lists AlphaGo's strengths:
It's not afraid of Ko.
Reading might be AlphaGo's strength.
Myungwan Kim lists AlphaGo's weaknesses:
Doesn't understand sente and gote, as explained earlier.
At times too obsessed with following common patterns, when the specific situation might require creative deviation from those patterns. Also explained earlier.
It doesn't understand Aji.
Myungwan Kim thinks AlphaGo has difficulty evaluating the value of specific stones, or perhaps cannot do it at all. It's good at making moves which directly gain territory for itself, but tends to miss moves which reduce the value of the opponent's stones.
It can make really high-level moves at times, but it doesn't understand those moves, which it betrays by making the right moves at the wrong time.
More generally, Myungwan Kim thinks a weakness of AlphaGo is its insularity. He stresses that human pros become much stronger when they discuss and analyse their games with other pros. Because AlphaGo primarily plays against itself, the quality of the feedback it gets on its play is too one-note, which leaves holes in its play, whereas human pros getting feedback from many other human pros end up with more robust and stronger playstyles. He really thinks that to progress past its current level, AlphaGo needs to play more against top human pros rather than just itself. Right now, Myungwan Kim and most pros he knows don't feel threatened by AlphaGo. They also talk about how AlphaGo can be useful for human pros to study and become stronger, which could make AlphaGo stronger in turn. (This last paragraph is imo all just Myungwan Kim musing based on his understanding of how AlphaGo was designed, more than evaluating its play itself, which is why I didn't list it as a bullet point.)
In general, I get the sense from Myungwan Kim's explanations that he thinks AlphaGo is stronger at the more concrete parts of Go play, such as territory and life-or-death, and weaker at the more vague concepts, such as influence and uncertainty.
Upcoming Match Against Lee Sedol:
Myungwan Kim says that, with all respect to the Google team, he thinks AlphaGo as it played against Fan Hui would have no chance against Lee Sedol. He says all pros who've looked at these games generally agree that AlphaGo would need a one- or two-stone handicap against Lee Sedol.
Myungwan Kim actually thinks AlphaGo, by the time it faces Lee Sedol, might be as strong as he himself is, in which case he still predicts Lee Sedol will win every game.
They think Lee Sedol won't make the mistake of playing overly aggressively like Fan Hui did.
They feel Google is somewhat overplaying its hand by challenging Lee Sedol this quickly. They say it's an amazing accomplishment to get an AI this strong, but as it stands it's not yet at grandmaster level. By moving the goalposts from beating a low-level pro to beating a grandmaster this quickly, they're somewhat cheapening their own accomplishment in beating Fan Hui.
My thoughts:
What I find interesting is what seems, to my layman's understanding, like a polar opposition between AlphaGo and Deep Blue. Deep Blue worked, afaik, because it could read much further ahead than any human chess player. But AlphaGo actually has as its weakness that it doesn't read ahead that well at all. Deep Blue worked by being a superhuman calculator. AlphaGo works by having a kind of superhuman intuition about what are good and what are bad moves, but it still makes mistakes because it doesn't really understand why a move is good or bad.
It'll be very interesting to see whether the program as it stands can ever compensate for that flaw. Since I don't understand anything about the program, my first guess would just be yes, but if it turns out to be no, that would actually be even more fascinating, from both a Go perspective and a programming perspective, I'd wager. :)
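For what it's worth, the contrast I'm describing can be sketched on a toy game tree. Everything here is invented purely for illustration (the tree, the evaluator, the "policy"), nothing from the actual Deep Blue or AlphaGo systems:

```python
# A toy contrast between the two approaches: `deep_search` is Deep
# Blue style (brute-force lookahead, backing values up minimax-style),
# while `intuition_pick` is, very loosely, AlphaGo style (a policy
# scores candidate moves directly, with no lookahead at all).

def children(state):
    """Each position offers two follow-up moves until 3 plies deep."""
    if len(state) == 3:
        return []
    return [state + (0,), state + (1,)]

def evaluate(state):
    """A stand-in leaf evaluator: just the sum of the moves played."""
    return sum(state)

def deep_search(state, depth, maximizing=True):
    """Look `depth` plies ahead and back up the minimax value."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    values = [deep_search(k, depth - 1, not maximizing) for k in kids]
    return max(values) if maximizing else min(values)

def intuition_pick(state, policy):
    """Pick the move the 'intuition' scores highest, no search at all."""
    return max(children(state), key=policy)

print(deep_search((), depth=3))        # value found by full lookahead
print(intuition_pick((), policy=sum))  # move preferred without lookahead
```

The search is exact but its cost explodes with depth; the policy is cheap but only as good as its scoring function, which matches the "superhuman calculator" vs "superhuman intuition" framing above.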

