Google Research Football (research-football.dev)
173 points by a_imho on April 29, 2020 | 73 comments



I'd be interested in seeing something like this where each individual player has skill levels for different actions, varying across both positions and teams. Not sure how to make the competition fully fair then (perhaps a sum skill level "salary cap") but it would be cool to see what strategies might arise when a team knows the strengths and weaknesses of their own players and can learn (maybe starting with a reasonable ballpark guess from "scouting") the weaker points of a given opposing team.
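Sketching what a sum-skill "salary cap" might look like (everything here is made up for illustration, not from any existing environment):

    import random

    SKILLS = ["pace", "passing", "shooting", "tackling"]
    TEAM_SIZE = 11
    BUDGET = TEAM_SIZE * len(SKILLS) * 60  # total skill points each team may spend

    def random_roster(budget=BUDGET, team_size=TEAM_SIZE):
        """Sample an asymmetric roster whose skills sum to a shared cap."""
        # Start everyone as an average "50 in everything" player.
        roster = [{s: 50 for s in SKILLS} for _ in range(team_size)]
        points_left = budget - team_size * len(SKILLS) * 50
        # Spend the remaining budget one point at a time on random (player, skill) pairs.
        while points_left > 0:
            player = random.choice(roster)
            skill = random.choice(SKILLS)
            if player[skill] < 99:
                player[skill] += 1
                points_left -= 1
        return roster

    team = random_roster()
    assert sum(sum(p.values()) for p in team) == BUDGET

Two teams built this way are "fair" in total ability but can differ a lot in where that ability sits, which is where the scouting and counter-strategy angle would come in.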

To make a ridiculous analogy to the other football: in NCAAF video games, the mascot mode gave every "player" on every team a 99 rating in all possible skills, which meant that any player standing on the field could throw as well as the quarterback, punt as well as the punter, etc., opening up many unrealistic trick plays.

But even adjusting for positional differences, I think a lot of the cool sports stories have involved seeing how a particular team strategizes against another particular team, which goes away when all teams start with equal, across-the-board abilities.


There is an upcoming contest to write an AI for an (American) football-based board game, Bloodbowl. The players have different stats as you describe, and the teams can be asymmetrical (though for the competition, both teams select 11 players from a fixed 12-player roster, so not too asymmetrical).

Here is a link to the competition: https://njustesen.github.io/ffai/bot-bowl-ii

(As an aside, just wanted to mention that the original article is actually about soccer, rather than American football. The comment I'm replying to is certainly about American football, so I feel my comment here is fair.)


Thanks for sharing, I've never played Bloodbowl but heard about it a few times now, will have to look into this!

(I honestly don't know enough about soccer to know how much positional variation or asymmetric skills matter. I assume they do but probably to a lesser extent than football, which I also know way more about, hence going on a bit of a tangent when trying to make the idea concrete.)


The mentioned blog post is mislinked in the article but it's been on HN before with a bit of discussion. It has some video and a better overview.

https://ai.googleblog.com/2019/06/introducing-google-researc...

Thread at:

https://news.ycombinator.com/item?id=20135726


Thanks for pointing that out. Fixed the link on the page.


With all (human) competitions cut or postponed, watching a bit of zany machine football was especially enjoyable, thanks!


RoboCup was founded almost 23 years ago. While the physical robot leagues often garner the most attention, there has always been a simulation league, in both 2D and 3D varieties. https://ssim.robocup.org/


Yeah, but in RoboCup 3D the robots fall all over themselves and can barely play: https://www.youtube.com/watch?v=7T1ElDs5eSQ - RoboCup seems focused more on simulating robot movement than the game itself.

This new Google competition appears to be more about ML and playing the game, judging from videos: https://www.youtube.com/watch?v=F8DcgFDT9sc


That's a fair characterization; the 3D league is meant to incorporate physical constraints, and learning "through" those to the higher level aspects of the game is challenging. That's what the 2D league exists for though.

The Google simulator lets you learn from pixels if you want to, but the agent that you're controlling only has 8 actions [1] available to it, so the learning problem here really has no bearing on a robotics application or anything in the real world that I can think of.

[1] https://arxiv.org/pdf/1907.11180.pdf
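For reference, the environment exposes a fairly standard gym-style interface. A minimal sketch, assuming the open-source gfootball package and its create_environment helper (argument names and available representations may vary between versions, so treat this as illustrative rather than exact):

    import gfootball.env as football_env

    # Full 11-vs-11 scenario; 'extracted' is the mini-map observation,
    # raw pixels are also available if rendering is enabled.
    env = football_env.create_environment(
        env_name="11_vs_11_stochastic",
        representation="extracted",
        render=False,
    )

    obs = env.reset()
    print("observation shape:", obs.shape)
    print("action space:", env.action_space)  # check how many discrete actions your version exposes

    done = False
    while not done:
        action = env.action_space.sample()  # random policy, just to step the simulator
        obs, reward, done, info = env.step(action)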


RoboCup also has leagues for wheeled robots, where there's a lot less falling over: https://www.youtube.com/watch?v=_Y5_iGxWFrQ

The humanoid robot leagues are constrained by cost - several of the simulated leagues, including the one shown in your video, have a corresponding physical league. And the physical league needs two teams of NAO robots, which cost ~$8000 each, and there are 11 on each team. So you're looking at $176,000 of student project for the physical league. You've correctly identified that they don't work very well - but more agile robot hardware would cost even more. I've never seen two Atlas robots together, let alone 22!


One has to wonder: why not deploy and benchmark their "novel" methods against the learning models developed over 20 years, at least in the 2D environment, instead of choosing to build a new environment and calling it novel?


They should at least list RoboCup 3DSim as prior work and explain the differences, but I can't find it.


Awesome. Finally we'll know if possession is the true answer.

After looking at the video, I get the feeling I could beat the winning team; it definitely misses tons of clear passing opportunities that would open the play.


Happy to see it is about "actual" football. I am embarrassed to say that football is one of my life-long passions, especially now that I am older than most players. But I would love to work in any area even tangentially related to it.


I don't know much about this football, so it is probably way more nuanced than I'm giving it credit for, but I'm actually disappointed this is not about American football. I think it is an interesting problem in that there is both the play-calling decision made by the coach and then the choices each (highly specialized) agent makes during execution, in addition to learning the physics to actually execute well, of course.

There is also a nice hierarchical structure i.e. there is a clear outcome to each play, to each set of downs, to each possession.

Obviously this mostly has to do with personal preference, and I know that's all you were commenting on. But I've encountered enough people that seem to think football is a mindless game that I wanted to get a comment out there when I saw the topic come up.


I think one difference between American Football and Football is that American football tends to have specialized events happening. You have field goals, you have turnovers, you have punts, and so on. Each of them requires specialized instructions; someone who is a wide receiver will not share the same skills as a QB. In Football, by and large the skillset that everyone develops is the same, i.e. good ball control and fitness. There is specialization when it comes to specific positions, but it is not as drastic a change. This could allow you to get away with simpler player models in Football: everyone can basically be modeled by a simple "footballer", instead of different people for "kicker", "quarterback", "linebacker" and so on. Potentially, this can make sports simulation environments for RL like this one easier to build for Football.
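To make that modelling point concrete, a toy sketch (nothing from the paper, just illustrative types):

    from dataclasses import dataclass

    # One generic archetype can cover every outfield position in soccer;
    # positions differ mostly in where the agent stands, not in its action set.
    @dataclass
    class Footballer:
        ball_control: float
        passing: float
        shooting: float
        tackling: float
        stamina: float

    striker = Footballer(ball_control=0.9, passing=0.7, shooting=0.9, tackling=0.3, stamina=0.8)
    centre_back = Footballer(ball_control=0.6, passing=0.7, shooting=0.3, tackling=0.9, stamina=0.8)

    # An American football sim arguably needs genuinely different archetypes,
    # because the roles use entirely different action sets.
    @dataclass
    class Quarterback:
        throw_power: float
        throw_accuracy: float
        scrambling: float

    @dataclass
    class Kicker:
        kick_power: float
        kick_accuracy: float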


As a fan of both American football and soccer, I'd love to see them both done. It would be interesting to see what an AI could do with the totally fluid nature of soccer, and also the hybrid poker+RTS aspects of American football.


I once read something along the lines of "Soccer is an RTS game and Football is a TBS."


It isn't about 'actual' football, it is about a simulation.


a simulation of humans playing football rather than robots playing.


Disney did some absolutely fascinating research here a few years ago. I suggest anyone interested in Deep Learning and Soccer checks out the following video:

https://www.youtube.com/watch?v=WI-WL2cj0CA

"Data-Driven Ghosting using Deep Imitation Learning"


Supposedly Liverpool are using this "ghost" technique as well in training to change player behavior, using the "expected goals" stat as the main metric or representation of the field state. It works well with Klopp's system.


Are there any regulations against transmitting data to wearables during play? I only see a regulation saying that data from wearables can't be used during a game. Is there anything preventing players from wearing e.g. a device that vibrates when they're not optimally positioned, according to camera tracking?


I'm pretty sure it would violate the "spirit of the game" rules by FIFA. Not sure if teams using digital analysis during halftime would even be allowed.


I've played with this a bit (I work in football analytics and play with tracking data a lot) and it's disappointingly unlike real football. Defenders are mostly useless ghosts and shooting seems wildly more accurate than in real life. Also a shame that the multi-agent track of the tournament is only 5 players - most of the off-the-ball behaviour of agents with the existing checkpoints is terrible.

If you wanted to think about this stuff with real football in mind, there are worse places to start than here:

https://www.youtube.com/channel/UCUBFJYcag8j2rm_9HkrrA7w


I haven't played around with this system, but judging by the videos it seems they should have vetted their physics system against some priors from real life, such as the defender's advantage (attacking is harder than defending) and the relative difficulty of actions (defending shouldn't be so much harder than shooting, dribbling and passing). It might also be good to have a basic structure for off-the-ball behavior to make it more realistic and organized (e.g. maintaining a 4-4-2 shape), so that there is always one defender behind each opposing player when the team doesn't have possession. That way, it won't look so much like a schoolyard game.
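The 4-4-2 idea is easy to sketch as a heuristic: give each out-of-possession player a home slot and shift the whole block toward the ball. A toy version on a normalised pitch (nothing to do with how the Google engine actually structures behaviour):

    import numpy as np

    HOME_SLOTS_442 = np.array([
        [0.25, 0.2], [0.25, 0.4], [0.25, 0.6], [0.25, 0.8],  # back four
        [0.45, 0.2], [0.45, 0.4], [0.45, 0.6], [0.45, 0.8],  # midfield four
        [0.65, 0.35], [0.65, 0.65],                          # two strikers
    ])

    def off_ball_targets(ball_xy, shift_strength=0.2):
        """Shift the whole 4-4-2 block toward the ball while keeping its shape."""
        ball = np.asarray(ball_xy, dtype=float)
        block_centre = HOME_SLOTS_442.mean(axis=0)
        shift = shift_strength * (ball - block_centre)
        return np.clip(HOME_SLOTS_442 + shift, 0.0, 1.0)

    print(off_ball_targets([0.8, 0.3]))

Even a crude anchor like this, blended with the learned policy, would stop the whole team chasing the ball like a schoolyard game.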


So the state includes the positions of all players and of the ball? That doesn't sound like solving football at all.

Why not build on the RoboCup 2D Football simulation league? It is a more realistic environment, where each agent has to sense the environment and decide what to do with some limited communication with other agents and with the coach. Furthermore, it already has some state-of-the-art teams. And finally, it is a step on the road to actually having a team of physical humanoid robots playing against a human team.

As is, this does not look very interesting and does not look too dissimilar from Atari reinforcement learning.


Interesting - this goal is scored by hacking the offside rule: https://youtu.be/c3dv8s2SR0c?t=54

The scoring player was not offside because they were still in their own half of the field when the goalie passed the ball.
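For what it's worth, the own-half exemption is easy to capture in a simplified check (this toy sketch ignores the "interfering with play" and throw-in/corner exemptions):

    def is_offside(attacker_x, ball_x, defender_xs, pitch_length=105.0):
        """Simplified offside check at the moment the ball is played.

        x runs from 0 (attacker's own goal line) to pitch_length (opponents' goal line);
        defender_xs are the opponents' positions in the same frame.
        """
        if attacker_x <= pitch_length / 2.0:
            return False  # can't be offside in your own half -- the loophole used here
        defenders = sorted(defender_xs, reverse=True)
        second_last = defenders[1] if len(defenders) >= 2 else 0.0
        # Offside only if ahead of both the ball and the second-to-last opponent.
        return attacker_x > ball_x and attacker_x > second_last

    # The goal in the clip: attacker still in their own half when the keeper plays it long.
    print(is_offside(attacker_x=50.0, ball_x=10.0, defender_xs=[104.0, 70.0, 60.0]))  # False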


That's hardly new - every 8-yo central defender in half-decent teams is taught never to cross the midfield line precisely for that sort of situation.


Two recent(ish) examples in real life:

Salah vs Manchester United: https://www.youtube.com/watch?v=STjxoGgFIzk

Torres vs Barcelona: https://www.youtube.com/watch?v=lKOXBLTsu3s


Torres's goal against Barca came because it was the 90th minute and Barca were about to be knocked out of the CL. At that point, everyone goes up; if the ball went out for a corner, the goalie would come up as well. I'm assuming it was something similar for that Salah goal. Context matters.


Yeah, the Salah one was a situation where United were 1-0 down at 90' and prodding desperately for an equalizer.


Torres was a bit of a flop at Chelsea. This goal was kind of a saving grace.


I still don't understand the reasoning by Chelsea there - any Liverpool fan watching the previous year knew he wasn't the same after that injury. Carragher says basically exactly that here (though the Andy Carroll buy was interesting as well): https://youtu.be/58hK8XnwJVQ?t=1050


What has that got to do with the parent comment?


It's a random comment on a random website. Chill.


Alisson, what a pass. My gosh. Fantastic buy for Liverpool.


A very underused but imo effective tactic for beating the offside trap (IRL) is having a player in an offside position behind the defenders and playing them onside with a winger running down the flank. When it works it looks almost like cheating.


Unless I am misunderstanding your tactic, this is almost exactly how Pep's City plays. They frequently rely on a pacy winger beating the defender out wide and cutting the ball into a forward who has beaten the defenders in the center.

You wouldn't be able to play the player onside just by having a winger; they would need to have the ball with them for it to count.


Not exactly. I tried to find a good example; this is the best I found.

https://youtu.be/4KDlSb8Qjp8?t=199

If the link above doesn't work, it's this video at ~3:20: https://www.youtube.com/watch?v=4KDlSb8Qjp8

Basically the forward(s) are goal-hanging in an offside position way behind the defenders, and they are played onside by a quick bit of wing play.


How is that hacking the offside rule? That is a basic attack in football.


Neat!

Watching the games, it looks like the effectiveness of dribbling is unrealistically high. Most scoring plays consist of dribbling around one player, then dribbling straight at the back line and hoping to win the take-on.


I would love to see some sort of AI be able to play and then understand the rules correctly, and eventually replace/aid referees on the field.


I'd be interested to watch some video from some of the games, if anyone knows whether there is any.



Google's answer to Amazon's DeepRacer?


DeepRacer extended to Mario Kart "combat" features would be a really interesting topic. Then the agents would have to learn driving and combat, and everything becomes a tradeoff between these two functionalities: maybe driving a little bit worse will result in a better position if you can, e.g., better target your opponents with the shells.


I got confused when the Google thing wasn't in American English.


[flagged]


???


Oops! That comment should have read "Url changed from https://github.com/google-research/football, which points to this". (Edit: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...)

"p" is a keyboard shortcut in the software I wrote to semi-automate repetitive moderation tasks. It's short for "points to" and constructs a comment with that text based on the before-and-after URLs. I can't fathom how that "p" ended up here instead. It guess it will remain one of the mysteries of moderating after midnight and a finite amount of good beer.


Flagging a dang comment was the highlight of my day. :p


What did I miss? Such confusion


Nitpick, but it recurs:

s/challange/challenge/g


FWIW this is from Zurich-based folks so they're presumably not native English speakers... and challenge is not exactly phonetic :)


They are also presumably comfortable with improving their work with feedback, even in relatively small ways. Some HN readers seem to believe otherwise, though.


Fixed, thanks!


Looks more like soccer to me.


Dude you’re either trolling or you’re just extremely oblivious.

Do you realize that football is the most popular sport in the world and that almost every country calls it football? (You know, because you're controlling a ball with your feet.)


> Do you realize that football is the most popular sport in the world and that almost every country calls it football?

So? Hacker News is an American site and we call it soccer.

> (you know, because you’re controlling a ball with your feet)

Actually, football is a class of games which involve kicking a ball to score a goal. Everything from "football" to soccer to rugby has kicking to score goals/points.

"Sports commonly called football include association football (known as soccer in some countries); gridiron football (specifically American football or Canadian football); Australian rules football; rugby football (either rugby league or rugby union); and Gaelic football."

https://en.wikipedia.org/wiki/Football

If you really want to be accurate, then you should call it association football. And the word soccer is derived from association football.

https://www.etymonline.com/word/soccer


This is "football" in most parts of the world - or the latinized futebol / fùtbol.


soccer is shorthand for association football.


“Google Research Football Environment is a novel Reinforcement Learning environment where agents aim to master the world’s most popular sport -- football.”


Are we done with games yet? I thought it’s been well shown that you can beat any computer game using RL. Would like to see people move on from these types of things and engage more with the difficulty of solving real world problems with AI.

At this point, I’m not really sure that the progress in playing games is carrying over to much in terms of solving real world problems, but curious to see if anyone has good counterexamples.


In theory, with accurate passing / historical data you could build out strategies that could translate to the real world from this setting. Football has two parts to it, one being the strategy aspect of moving the ball and beating the opposition position, and the other of actually executing these instructions physically. While the Google framework lacks on the latter (no sports simulation game is really close to the real world in terms of player models), the former can probably be learnt via RL, and maybe open itself to some neat counters to traditionally established strategy.


There are still lots of things that are more easily answerable in games to see if they exist.

For example, can we get AI to play deceptive strategies without being explicitly rewarded (Open AI hide and seek game, StarCraft fog of war plays, etc)?

As to whether it can translate to solving real-world problems is arguable (there are some cases, yes), but it definitely helps in weeding out inference models or training strategies that are not viable in games (and probably not in the real world).

edit: Forgot to list an example. Using the same reinforcement learning strategies not to play games, but to design them.


Thanks for the perspective. I have been asking this question for a few years now, and I think DeepMind's marketing has been pretty deceptive on this. What examples do you have in mind where the research from playing games with RL has carried over into solving a real-world problem? Probably the state of the art has advanced while I have focused on applying supervised learning to some applied problems that interest me.


"can we get AI to play deceptive strategies without being explicitly rewarded"

Haven't we seen quite a bit of that? I think there have been multiple items on HN about instances where ill-defined goals led to AI finding bugs in a simulation or unintended solutions. Deception is really just about human expectations.


> it’s been well shown that you can beat any computer game using RL.

Starcraft has not been beaten yet. DeepMind's AlphaStar is a grandmaster, but is nowhere near world champion levels.


It's really hard to say with AlphaStar, because it depends enormously on what type of APM constraints the developers put on it. As much as I love the strategy of StarCraft, brute force is incredibly important. Serral, the best player in the world at the time by a wide margin, played some fun games against a few lower level European pros controlling the same opponent and it was so lopsided he quit after ~2 games.

If AlphaStar is allowed just slightly too much micro it's easily world champion beating (eg the version that killed TLO in the original demo, it was winning fights that should have been blunders), and too little and it can get squashed by regular GMs.

I don't know how the competition can ever really be calibrated to be fair. The latest version of AlphaStar was fun because they tuned it to be good but not unstoppable, but I have no idea if those were 'fair' settings or not. Maybe AlphaStar was too handicapped.


It's probably not even grandmaster if you restrict it to human-level mechanics. I know what DeepMind has said about APM, but if you look at how it functions it's still superhuman (like the fact that it never uses control groups since it can always arbitrarily select whatever subgroup of units it wants instantly with no mistakes).


DeepMind's AI would be ranked in the 5800-6000 MMR range by Blizzard if they had actually used the matchmaking instead of random opponents and an Elo calculation formula that wasn't suited for the circumstances nor used by Blizzard. Its actual win-loss ratios against opponents below and above 6000 were: T: 15-4 and 3-8, Z: 15-8 and 3-4, P: 26-1 and 0-3 (which produced a whopping 6352 by the formula DeepMind used).


This completely ignores what I just said.


It's great to see new football strategies. But I agree, personally I'm most excited about proving/discovering new math theorems using ML.



