The gist is that you would look for deviations from theory on one (probably small) dataset, identify any regions of interest, and then run a proper analysis of those regions on a different dataset to see whether the effect is real.
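Roughly, a minimal sketch of what that two-stage scan-then-confirm could look like, assuming simple Poisson-counting bins; the dataset names, thresholds, and Bonferroni-style correction here are just illustrative, not anyone's actual analysis chain:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Expected background counts per bin (stand-in for the theory prediction).
expected = np.full(50, 100.0)

# Stage 1: scan the small "exploration" dataset for regions of interest.
explore = rng.poisson(expected)
local_p = stats.poisson.sf(explore - 1, expected)   # P(N >= observed | expected)
regions_of_interest = np.where(local_p < 1e-3)[0]   # deliberately loose threshold

# Stage 2: test only the flagged regions on an independent dataset, so the
# trial factor is the number of flagged bins rather than all 50.
confirm = rng.poisson(expected)
for i in regions_of_interest:
    p = stats.poisson.sf(confirm[i] - 1, expected[i])
    p_global = min(1.0, p * len(regions_of_interest))  # crude Bonferroni correction
    print(f"bin {i}: confirmation p-value {p:.2e} (corrected {p_global:.2e})")
```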
The problem I see with this approach is that it's much more likely to pick up lingering detector issues than the regular test-a-theory approach. (That's the difference between looking for a specific thing in a specific place and picking up anything unexpected.) I wouldn't worry so much about purely statistical artifacts, because those can often be worked out with prescriptions and remeasuring. The systematic but not-understood biases are the ones that would plague this approach, since in the extreme they would require an independent experiment. I wonder whether CMS and ATLAS are sufficient for that.
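As a back-of-the-envelope illustration of why "anything unexpected" is so much leakier than "a specific thing in a specific place", and why the statistical side at least has standard prescriptions (trial-factor or look-elsewhere corrections), here is a toy calculation; the bin count and threshold are arbitrary:

```python
from scipy import stats

# Chance that a blind scan over many bins "finds" something in pure noise:
# with 50 independent bins and a 3-sigma local threshold, at least one
# spurious excess is far from unlikely.
n_bins = 50
local_p = stats.norm.sf(3.0)                 # one-sided 3-sigma local p-value
global_p = 1.0 - (1.0 - local_p) ** n_bins   # probability of >= 1 false excess
print(f"local p = {local_p:.2e}, global p over {n_bins} bins = {global_p:.2e}")
```

No such simple correction exists for a systematic bias that isn't understood, which is the point about needing an independent experiment.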