Hacker News

This is not a reasonable summary. The adversarial AI is not finding some weird position that relies on KataGo misjudging the status of groups. It relies, supposedly, on KataGo not understanding the ruleset, which uses area scoring and does not include removing dead stones at the end of the game. (Under area scoring you can always play the captures out without losing points, so this is a simple way to avoid disputes between computers, which don't get bored of playing it out.)

I assume that KataGo still has this "flaw" after adversarial training simply because that training doesn't overcome what it learned in environments where taking dead stones off the board (or denying them the space to make two eyes, if the opponent passed every move) isn't expected.
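To make the area-scoring point concrete, here is a minimal toy sketch (my own illustration, nothing to do with KataGo's actual code) of Tromp-Taylor-style area scoring: a color's score is its stones on the board plus the empty points that reach only that color. Filling a point of your own territory trades a territory point for a stone, so the total doesn't change, which is why computer rulesets can simply have both players play everything out:

```python
# Toy Tromp-Taylor-style area scoring (illustrative only).
# score(color) = stones of that color + empty points bordered only by that color.
from collections import deque

def area_scores(board, size):
    """board: dict (row, col) -> 'B' or 'W'; empty points are absent."""
    scores = {'B': 0, 'W': 0}
    for color in board.values():
        scores[color] += 1                      # count stones
    seen = set()
    for r in range(size):
        for c in range(size):
            p = (r, c)
            if p in board or p in seen:
                continue
            # flood-fill this empty region, recording which colors border it
            region, borders, queue = [], set(), deque([p])
            seen.add(p)
            while queue:
                q = queue.popleft()
                region.append(q)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    n = (q[0] + dr, q[1] + dc)
                    if not (0 <= n[0] < size and 0 <= n[1] < size):
                        continue
                    if n in board:
                        borders.add(board[n])
                    elif n not in seen:
                        seen.add(n)
                        queue.append(n)
            if borders == {'B'}:
                scores['B'] += len(region)      # territory counts the same as stones
            elif borders == {'W'}:
                scores['W'] += len(region)
    return scores

# A black wall across a 3x3 board: 3 stones + 6 points of territory = 9.
b = {(1, 0): 'B', (1, 1): 'B', (1, 2): 'B'}
print(area_scores(b, 3))
# Filling a point of black's own territory leaves the area score unchanged.
b[(0, 0)] = 'B'
print(area_scores(b, 3))
```

Under territory scoring the second position would be a point worse for black, which is exactly the dispute-resolution difference the comment above describes.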

See https://boardgames.stackexchange.com/questions/58127, which includes an image of a position the adversarial AI supposedly "won" that even at your level should look utterly laughable. (Sorry, I don't mean to condescend; I'm only somewhere around 1 dan myself.)

(Elo ratings are sometimes used in Go, but I don't think they can fairly be compared to chess ratings, nor used as a metric for "level of effort".)




There are multiple examples from this research group.

I believe my discussion above is a reasonable summary of the cyclic attack linked at the beginning of that website.

https://goattack.far.ai/game-analysis#contents


What we need are more sides to the argument. I'm pretty sure you're both off.

zahlman doesn't seem to have read the part of the paper dealing with cyclic adversaries, but the cyclic-adversary strategy doesn't depend on KataGo misclassifying groups as alive or dead over long time horizons. If you watch the example games play out, KataGo kills the stones successfully and is trivially winning for most of the game. Then it makes a short-term, devastating mistake: it doesn't seem to realize it is short of liberties, and lets the adversary kill a huge group in a stupid way.

The mistake KataGo makes doesn't have anything to do with long move horizons; over a long time horizon it still plays excellently. The short horizon is where it mucks up.
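For anyone unfamiliar with the term: a group's liberties are the empty points adjacent to it, and a group with no liberties is captured. A toy flood-fill liberty counter (purely illustrative, not how KataGo evaluates anything) makes "shortage of liberties" concrete:

```python
# Toy liberty counter: flood-fill the group containing `start`,
# collecting the distinct empty points adjacent to it.
from collections import deque

def liberties(board, size, start):
    """board: dict (row, col) -> 'B' or 'W'; empty points are absent."""
    color = board[start]
    group, libs = {start}, set()
    queue = deque([start])
    while queue:
        q = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (q[0] + dr, q[1] + dc)
            if not (0 <= n[0] < size and 0 <= n[1] < size):
                continue
            if n not in board:
                libs.add(n)                     # adjacent empty point
            elif board[n] == color and n not in group:
                group.add(n)                    # same-colored neighbor joins the group
                queue.append(n)
    return len(libs)

# A two-stone black group on a 5x5 board has 6 liberties...
print(liberties({(2, 2): 'B', (2, 3): 'B'}, 5, (2, 2)))
# ...and two adjacent white stones reduce that to 4.
print(liberties({(2, 2): 'B', (2, 3): 'B', (2, 1): 'W', (1, 2): 'W'}, 5, (2, 2)))
```

A capture race is just both sides filling each other's liberties; whichever group hits zero first dies, which is what happens to KataGo's huge group in the cyclic games.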


I don't suppose you could directly link to a position? It would be interesting to see KataGo make a blunder of the sort you describe, because traditional Go engines were able to avoid them many years ago.


Consider the first diagram in the linked paper (a, p. 2). It is pretty obvious that black could have killed the internal group in the top-right corner at any time, for ~26 points; that would be about enough to tip the game. Instead black's group somehow died, giving white ~100 points, and white wins easily. Black had ~50 moves in which to kill the internal group.

Or if you want a replay, try https://goattack.far.ai/adversarial-policy-katago#contents - the last game (KataGo with 10,000,000 visits - https://goattack.far.ai/adversarial-policy-katago#10mil_visi... - game 1 in the table) shows KataGo with a trivially winning position around move 200 that it then throws away with a baffling sequence of about 20 moves. I'm pretty sure that even as late as move 223 KataGo has an easily winning position; it looks like it wins the capture race in the extreme lower left. It would have figured out that the game was over by the capture 8 moves later.


I see what you mean.

So "dead man walking" is a bad description. From your perspective it's still KataGo winning, but then a series of serious blunders occurs in these attack positions.



