I think it is the way they blithely mention that some statisticians prefer to use other methods, with a list of examples, that suggests they are blessing them. Surely these other methods ought to be scrutinized; we don't know that they would detect irreplication as significance tests do. The big issue really concerns using frequentist methods with biasing selection effects: multiple testing, cherry-picking, data-dredging, post hoc subgroups, etc. The only problem is that many who espouse the "other methods" declare that these data-dependent moves do not alter their inferences. Some oppose adjustments for multiplicity, and some even deny that error control matters. (This stems from the Likelihood Principle.) If you read the ASA guide as allowing that (in tension with Principle 4's warning against data dredging when it comes to frequentist methods), then the danger the authors mention is real. What was, and is, really needed is a discussion about whether error control matters to inference.