Svidler beats the Fed

Going into this round, Fed was the sole leader with seven points. Svidler was one point behind, and his buddy Vitiugov even had a chance to take the lead if he beat Dubov and Svidler won. So basically Svidler had every reason to play a complicated game. The only problem is that this is incompatible with his free-roll style. Playing unclear positions where he doesn’t have a forced draw in his back pocket is simply not part of his repertoire. Obviously that’s only half of the story, because the Fed self-destructs in this game for absolutely no reason. Svidler didn’t win the game, Fedoseev lost it. Maybe losing in rounds 6 and 7 had something to do with it.

Even more on AlphaZero

Apparently AlphaZero has a predecessor named Giraffe. Its developer, Matthew Lai, now works for DeepMind. Surprise?

The paper on Giraffe explains everything in much greater detail. For instance, it took 72 hours to train the net on a workstation with two 10-core Intel Xeon E5-2660 v2 CPUs. If we use a rough shortcut and divide those 72 hours by the four hours AlphaZero needed, then Google’s high-end cluster with 64 TPUs did the job roughly 18 times faster.
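The shortcut above is just this one division (the 72-hour figure is from the Giraffe paper; the four hours is AlphaZero’s reported training time for chess):

```python
# Back-of-the-envelope speedup estimate described above.
giraffe_hours = 72    # Giraffe: workstation with 2x10-core Xeon CPUs
alphazero_hours = 4   # AlphaZero (chess): 64-TPU cluster, per DeepMind

speedup = giraffe_hours / alphazero_hours
print(f"roughly {speedup:.0f}x faster")
```

Of course this ignores that the two systems trained different nets on entirely different hardware classes, so treat it as an order-of-magnitude estimate at best.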

In 2016 Giraffe peaked at around Elo 2410 in engine competitions, which is remarkably weak considering that even the current version of the good old GNU Chess is rated Elo 2800+. Given these initial results, the decision to keep investing in the idea is quite remarkable too.

As I wrote in the previous article, my Stockfish, casually running on just one core of an Intel i7 4760 at 3.60 GHz, took roughly 75 minutes to find the star move at depth 41. Hardware is the bottleneck. Just for comparison: massive hardware upgrades almost doubled the playing strength of AlphaGo, simply by expanding the search horizon.

Looking at the difference in raw hardware power, this reminds me of David vs. Goliath. Running on identical machines, Stockfish should beat AlphaZero easily. Drawing 72 games with such a handicap is actually amazing. Let’s not kid ourselves: Stockfish on 4 TPUs would also beat Stockfish running on much weaker hardware convincingly.

One thing is clear: it will take a few CPU generations until mere mortals like you and me can run AlphaZero at home.


Update (01.01.2018): Over three weeks later the other 90 games have still not been released to the public. I wonder why.

More on AlphaZero

Forty years ago the New Yorker published an interview with the math professor Paul Magriel, aka X-22, who became Backgammon World Champion shortly afterwards. To develop his tournament strategy, he did the following:

“I used to play backgammon against myself,” he said, “and once I had a private tournament with sixty-four imaginary entrants, whom I designated X-1, X-2, and so forth, through X-64. In the final, X-22 was pitted against X-34, and X-22 won.”

Source: New Yorker, “Playing X-22”, 5th of December 1977.

According to this paper, AlphaZero pretty much did the same:

In AlphaGo Zero, self-play games were generated by the best player from all previous iterations. After each iteration of training, the performance of the new player was measured against the best player; if it won by a margin of 55% then it replaced the best player and self-play games were subsequently generated by this new player.

This sounds rather easy in theory, but it’s not that easy to code. While Magriel could make the deliberate decision to play for certain points or to use the cube in a certain way, what exactly does AlphaZero modify in each player? There is a certain difference in style between Tal and Petrosian, but how do you express that in numbers? In other words, it’s not easy to describe a style in a formal language or as an object. Stockfish is much easier to configure, because you can simply give weights to certain positional features and modify the value of the pieces. I guess the solution to this problem is worth the 400 million dollars that Google paid for DeepMind in 2014.
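As a toy illustration of the gating rule quoted from the paper, here is a minimal sketch. This is not DeepMind’s code: a “player” here is reduced to a single Elo-like strength number, and the training step is faked with random noise; only the 55% replacement logic mirrors the description.

```python
import random

random.seed(0)  # reproducible toy run

def win_prob(a, b):
    """Elo-style expected score of player a against player b."""
    return 1.0 / (1.0 + 10 ** ((b - a) / 400.0))

def evaluation_match(candidate, best, games=400):
    """Simulate an evaluation match; return the candidate's score."""
    wins = sum(random.random() < win_prob(candidate, best)
               for _ in range(games))
    return wins / games

def gated_training(iterations=10, gate=0.55):
    """Replace the best player only if the candidate scores >= 55%."""
    best = 1000.0  # strength of the current best player
    for _ in range(iterations):
        # Stand-in for a real training iteration: strength drifts noisily.
        candidate = best + random.gauss(20, 30)
        if evaluation_match(candidate, best) >= gate:
            best = candidate  # self-play games now come from this player
    return best

final = gated_training()
```

The point of the gate is robustness: since the training signal is noisy, a candidate that merely breaks even is discarded, so the generator of self-play data only ever changes when there is statistical evidence of an improvement.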

Initially I thought there was a pretty good chance that the whole story was just a scam, like the match Slyusarchuk vs. Rybka. There is even an incentive for manipulation: just check out how the stock market reacted to the announcement.

After I saw the following game, I could pretty much rule all of that out. The move 21. Bg5 is simply an amazing bolt out of the blue that basically wins on the spot. It takes Stockfish over an hour to evaluate the move correctly at depth 41. The idea is hidden so well that it could easily qualify as preparation for a World Championship match.

AlphaZero beating Stockfish

Well, it seems the new age of chess has arrived. Obviously even AlphaZero cannot improve upon forced draws, so this project is still quite safe. Nevertheless, I wouldn’t be surprised if the American players got access to AlphaZero opening analysis for the Candidates; it could even lead to an American world champion in this cycle.

Reading between the lines

Sometimes games are interesting not because of what has been played, but because of what hasn’t been played. In this brand-new Dragon encounter, the Frenchman with two names chose a move order against Naka that avoids both the Chinese Dragon and the Topalov Variation. That wouldn’t be anything special if it didn’t come at a price: White can no longer go for the official refutation of the Dragon, 12. Kb1. What does MVL tell us? Either the Chinese Dragon or the Topalov Variation is safe to play (or both)!

Grand Prix for Depressnyak

Some people believe that blitz is the true test of talent. If that is the case, then the following game proves that Grischuk is very talented. First he comes up with a positional piece sacrifice, and then he finds a very unusual mating pattern. This was one of those jaw-dropping moments. WOW! Imagine how MVL must have felt after this hit.

After the game someone mentioned that this was all known up to 13. Bxe7. If that is true, it could have been preparation, because Grischuk simply plays like an engine from there. The whole line is a forced draw if Black finds 17…Rd7, which also suits his free-roll style.

Another line bites the dust

The Nakamura Variation of the Classical French is the playground for computer-based Drawmeisters. The evaluation 0.00 is just too tempting, even if the position looks as ugly as this one. The risk of playing such lines is that your first independent move may come in a position that is already lost, and that is exactly what happened here.


Today a French Drawmeister shows how to deal with a former World Championship candidate. Well done 🙂

Clash of Titans

Congratulations on winning the tournament.

Here we go again

I assume that everyone remembers Hou Yifan getting paired with one female player after the other in Gibraltar. Well, guess what has happened in the Isle of Man Tournament so far! In the first round she was paired with Alexandra Kosteniuk, and in the second her opponent was Elisabeth Pähtz. That’s some weird coincidence, isn’t it? The chances that the pairing software doesn’t contain a hidden bias, or that someone isn’t trying to rub it in, are very small.

Update: 3rd round – third female opponent! Come on, WTF is that?
Update: 4th round – fourth female opponent 😉
Update: 5th round – no pairing! Apparently she took a bye.
Update: 6th round – finally a male opponent (Elo 2481)!

Let’s not forget that Hou Yifan (ranked 75th in the world) gets lots of recognition and many invitations that higher-ranked men don’t get, because women are supposed to be equal to men in chess playing strength. In other words, she profits a lot from theoretical equality. Since she is being paired with equal opponents, she shouldn’t be upset.

P.S.: Here is a report on the previous incident. It shows that there is a way to create such pairings while staying within the boundaries of the system; hence the perfect crime, so to speak.

Play it safe!