I haven't seen much discussion of defending against timing attacks. Is it really that difficult? If we introduced long delays and artificial data sizes, shouldn't it be possible, a constant stream notwithstanding? Of course it makes things clumsier, but that's always the price.
I'm not sure how effective that would be. They see you go in and they see you come out. If nothing else, they can unmask you by process of elimination. Measures like those would probably make it more difficult, but not impossible.
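For what it's worth, the delay-and-padding idea is easy to sketch. This is just a toy illustration of the concept, not a real defense; the `send` callback, bucket size, and delay range are all made up:

```python
import os
import random
import time

BUCKET = 4096  # pad every message up to the next multiple of this size

def pad_to_bucket(payload: bytes) -> bytes:
    """Pad with random bytes so message sizes only reveal a coarse bucket."""
    target = -(-len(payload) // BUCKET) * BUCKET  # round up to a bucket boundary
    return payload + os.urandom(target - len(payload))

def send_with_jitter(send, payload: bytes, max_delay: float = 2.0) -> None:
    """Wait a random delay before sending, to blur timing correlation."""
    time.sleep(random.uniform(0.0, max_delay))
    send(pad_to_bucket(payload))
```

Even this shows the limits the parent points out: an observer still sees which bucket you fall into and roughly when you transmit, so it raises the cost of correlation without eliminating it.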
If you're going to do illegal things, or want anonymity, stay within the network: domains ending in .onion. The regular web is inherently not anonymous.
I don't have any experience in the field, but reading the paper, it seems impossibly weak and almost useless. It should only work for a very limited type of game, in which case it will never help anyone speed up game construction more than a simple sprite engine would. It seems more an exercise in using OpenCV.
What am I missing here, because I'm positive that I am missing something?
Imagine this paired with a system like the one Bret Victor demos in "Inventing on Principle" [0] or in "Stop Drawing Dead Fish" [1], in which the user hand-animates what they want the game movement/overall gameplay to look like, adjusting as necessary. The system described in this paper could get you a decent way from that hand-animated mockup to a working game that looks and feels first and foremost like you intended, rather than working from a playable-but-bad-feeling prototype game engine and having to endlessly adjust to get the game feel "right".
Looking at the paper the set of games it can build seems rather limited.
It could work for cookie cutter platformers (the sort of games that engines that don't require programming let you build).
You're missing that not everything has to be a breakthrough. Sometimes fun research experiments are just cool and fun. I didn't get the impression that they were overselling the research.
Sure they are. "Recreate game engine", when it actually is "optimally match a database of precooked if-then rules with simple linear functions". For one Mario level. Taking two weeks of runtime to learn it.
The set of facts is worthless for anything of complexity.
It does not really generate the rules itself. (They are directly derived from the facts.)
What they did is only a small improvement over a typical expert system or CNN for a very limited case.
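To make the complaint concrete, here is roughly what I mean by "if-then rules with simple linear functions". The format and names are my own invention for illustration, not anything taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    condition: str   # e.g. "mario is airborne", matched against observed facts
    target: str      # the sprite property this rule updates
    slope: float     # linear update: new_value = slope * value + intercept
    intercept: float

    def apply(self, value: float) -> float:
        """Apply the rule's linear update to the current property value."""
        return self.slope * value + self.intercept

# e.g. gravity expressed as a per-frame linear update to vertical velocity
gravity = Rule("mario is airborne", "velocity_y", 1.0, -0.5)
```

A big pile of entries like that, selected by search, is a lookup table with interpolation, not an understanding of the engine.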
Choice quote: "Notably each fact can be linked back to the characteristics of a sprite that it arose from."
Wrong. When you pick up a flower your sprite changes, but how does it know you can suddenly shoot bullets? Etc. And for more complex games, a lot of the data requires exploration well past the GUI. An action might change acceleration (suddenly nonlinear ice physics with momentum), or direction handling, or let you start flying, or many other things. What if the thing moves in a circle? What if something only happens with some probability?
The approach will fail at modelling as soon as Mario level 1-4 (the one with rotating fireballs), or produce an insane representation of the engine. Note how it cannot even model the dampened triangle-wave motion of the fireballs in the example; it assumes they're a sparse line.
The paper presents no way to reduce this huge number of "if-then rules" to something actually useful, either.
Since this doesn't even attempt to explore the state space, it also requires a huge database.
Calling this "recreate game engine" is akin to saying that since we have an algorithm that can solve checkers, it will solve poker, go and also whodunit. And can play Jeopardy too.
I even suspect it's not useful as a preprocessor to something that can actually play a game, as it will break later cases.
I was pretty impressed by the result until reaching "a relatively simple search algorithm that searches through possible sets of rules".
CNNs have done such impressive things that "outperforms convolutional neural nets" sounds like an achievement, but CNNs have never been the pinnacle of accuracy - their key advantage is flexibility. Feature learning costs some reliability, but gives a huge advantage in saving human time and effort.
This appears to be exactly the opposite approach: an AI system that gains its accuracy by working from heavily pre-defined rulesets. Feature engineering is fine in a stable, well-understood domain, but it reduces the impressiveness of the 'AI' result. And more worryingly, it cripples the flexibility of the agent in an open domain like "video games".
Hand-authoring a set of functions required to derive the model means embedding a huge portion of the game engine in the engine-learning framework - what's left to learn is basically just parameter values. Mario without powerups is a game entirely defined by 2D movement, collisions, animation, and a tracking camera. That's the same feature list that had to be hand-defined for the engine.
I don't mean to attack the authors. This is still an interesting result, and they do acknowledge this in P2 of 'Limitations'. (Albeit with some lofty claims about eventually understanding real video - are they planning to encode physics as their ruleset?) But the article really oversells the capacity of a system that was spoon-fed the essentials of what it had to learn.
People are generally willing to forgo the cost/benefit analysis of a machine learning solution. There is an abiding faith that the costs will keep improving, though I am not so sure anymore.
An artificial intelligence observed a game being played and recreated it and you aren't impressed at all? Man, the future must be boring for people living in it.
Yes, the technique it uses only works for a certain space of possible games. That means there is an obvious path to increasing the size of that space.
"Observed" after being fed lots of sprites, explicit knowledge of how to play it, and how to win at it in the objective function. And it "played" only one kind of game.
"Obvious path" riiiight.
In addition, it is wrong to call this new; such attempts have been made before, with stronger results and more generality. For example, this (relatively dumb) approach from 2013 generalized fairly well, much better than I've seen even a silly deep network generalize: http://www.cs.cmu.edu/~tom7/mario/
So yes, they are overselling it a lot. I am 100% not impressed by this paper as it lacks critical detail.
That it can parse stuff from 2D frames is not interesting; that is basic motion analysis, which can be done by a supremely stupid algorithm, not even a CNN.
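By "supremely stupid algorithm" I mean something on the level of plain frame differencing, which already finds everything that moved between two frames. A toy sketch:

```python
# Toy frame differencing: mark pixels whose intensity changed between
# two frames. Anything this finds is "motion"; no CNN required.
def moved_pixels(prev, curr, threshold=10):
    """Return (row, col) coordinates that changed more than threshold."""
    return [
        (r, c)
        for r, row in enumerate(curr)
        for c, value in enumerate(row)
        if abs(value - prev[r][c]) > threshold
    ]

frame_a = [[0, 0, 0], [0, 0, 0]]
frame_b = [[0, 0, 0], [0, 255, 0]]  # one "sprite pixel" appeared
```

Cluster the changed pixels and you have moving sprites; that part of the pipeline is decades-old computer vision.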
I mean, Google's best AI can play 15 rooms of a simple game...
You are comparing a system that learned to play a game (which indeed was very impressive), to a system that learned to make the game by observing the behavior from video. None of your points actually relate to the system described.
By "make" you mean "fit some sort of simple function approximation after hardcoding lots of knowledge about the system and the general function", right? Which is essentially what neural networks and all the other optimization algorithms were made for?
(The algorithm as described will require a huge database for a game that is even slightly more complicated than Infinite Mario. And we don't even have the sources to try that.)
Even the object motion tracker will choke in a 3D environment. (It is a greedy matcher, as they describe it.)
Speaking of impressed, the Google DeepMind paper is far more feasible to improve upon, and richer in detail: https://arxiv.org/pdf/1606.01868v1.pdf
Compare the two papers purely on quality.
I understand why you'd publish any worthless junk in the current academic culture, but I don't agree that we should.
My complaint is that the path to improving their space is "humans hardcoding endless rule lists".
Section 3.1 of the paper outlines a list of 'hand-authored' functions the agent used to derive events from images. They include animation, sprite-entity relationships, motion, collision, and camera movement. Which is to say, every component of Super Mario level 1-1.
That doesn't mean the paper is uninteresting, or useless. Defining facts based on those possible rules is still an intriguing result. I'm having real trouble working out from the paper how well their agent understood conditional changes like size and fire flowers - if it accurately recreated those rules, then I am impressed.
But "modeled without accessing the code" is a dubious claim about an agent that started with a list of the core rules included in its code. The Engine Learning section (3.2) mentions that automatic derivation of possible facts is a key area for future work. That is to say "this would be flexible if it did feature learning instead of needing feature engineering". Unfortunately, that's the problem in agent design, and the value of CNNs isn't unbeatable performance but the capacity for flexible feature learning. The press release here elides the issue of feature learning entirely when comparing performance.
The point of this is what is known as model-based learning. Basically, the long-term goal is to be able to predict the outcome of a given action (jumping, walking left, etc.) by an AI agent. When you can do this, the agent doesn't need to die to know that jumping down a hole will end the game; it can predict it. Once you've done this, AI techniques like Watson's can control robots. They won't need to kill someone to know that driving a pole through a head is no good; they'll be able to 'reason' it out.
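A toy sketch of the idea, with an entirely made-up state and hand-written dynamics standing in for a learned forward model:

```python
# Toy model-based lookahead: the agent predicts the result of each action
# with a forward model instead of trying it for real.
def forward_model(state, action):
    """Predict the next state. Hand-written stand-in for a learned model."""
    x, alive = state
    x += {"left": -1, "right": 1, "jump": 0}[action]
    if x == 3:            # pretend there's a hole at x == 3
        alive = False
    return (x, alive)

def safe_actions(state, actions=("left", "right", "jump")):
    """Reject actions the model predicts lead to death, without ever dying."""
    return [a for a in actions if forward_model(state, a)[1]]
```

The hard part, of course, is learning `forward_model` itself from observation, which is exactly what this line of research is after.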
They are showing an approach that works on a semi-realistic problem.
The idea of producing a rule-based system from deep learning, while not exactly a breakthrough, is an interesting direction to take.
It is research. It is not designed to solve a real-world problem but to give ideas to engineers. And really, I can see several simple systems that could be programmed with simple rules learned from inputs and outputs.
I also thought it was ironic. He does mention a valid flaw in the one-person-one-vote system. I wonder whether quadratic voting, in practice, encourages extremism. It also offers more choices, which means more time spent thinking and a harder decision. Perhaps it discourages voting among those who care only a little.
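For anyone unfamiliar, the quadratic part is just that n votes on one issue cost n^2 credits, so strong preferences get expensive fast. A minimal sketch:

```python
def vote_cost(votes: int) -> int:
    """In quadratic voting, n votes on a single issue cost n**2 credits."""
    return votes ** 2

def max_votes(budget: int) -> int:
    """Most votes you can afford on one issue with a given credit budget."""
    n = 0
    while vote_cost(n + 1) <= budget:
        n += 1
    return n
```

So with 100 credits you can cast at most 10 votes on one issue, or spread them as, say, 8 votes on one (64 credits) plus 6 on another (36 credits), which is where the extra deliberation cost comes in.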
I would argue that, in terms of doing good, investing money in concrete and well-founded technology has a larger impact. We can already eradicate diseases; we can already end world hunger. We can make so many miracles come true. What good is it to create further miracles if only the very few ever get to have them?
An interesting phenomenon in spatial thinking is that scale makes a big difference. We can think about little objects in our hands, or we can think of bigger human-sized structures that we scan, or we can think of spatial structures that encompass us and that we move around in. We tend to think more effectively with spatial imagery on a larger scale: it's as if our brains take larger things more seriously and can devote more resources to them.
Really? I prefer imagining things in front of me; they become more manageable at a scale similar to a sheet of paper or to myself, rather than to my house. It just seems more natural. Does anyone have any insights on this?
The brain has dedicated subsystems for different aspects of spatial and visual understanding.
Certain groups of cells (place cells, grid cells) are known to be involved specifically in the ability to track one's own location and movement within the external world. This ability is likely a very old and important evolutionary invention in the development of higher animals, thus very sophisticated and well integrated with other functions, particularly memory. Think of the method of loci, known already in Ancient Rome and Greece, or indeed how quickly one learns to navigate a new city (or a video game level, say). This probably explains the phenomenon the author describes.
That said, the relative importance of the various spatial-visual subsystems of the brain probably depends heavily on the task, and perhaps also varies between individuals.