I think the vast majority of the criticism here targets not the research per se, but the way the results are hyped and presented as a massive breakthrough. I agree with that criticism, and I also find the two positive Nature reviews rather shallow, at least from a non-expert's perspective (this is not your fault, of course). As for long-term impact, I'd find it interesting to discuss how your work could (ideally) interact with proof assistants such as Lean. The work around Lean is also a good example of a "hyped" topic that its contributors present with caution and modesty.
I don't see much value in debating the procedural aspects (the Nature review process, etc.).
I see a lot of value in discussing the research and its content.
We think the results shown in the paper are significant, and so do others who reviewed our work.
This is where I think the focus should be.
Please read our paper, not only the blogs criticizing it :)
There is a link to access it here: https://rdcu.be/ceH4i
Then don't take offense at the discussion here; it's mostly about "meta" aspects of science communication, and you are probably not responsible for any of the aspects that have been criticized.
Regarding the research itself: I am not an expert, but I am curious how this line of research (automated conjecture generation) intersects with proof automation and proof assistants, in particular the work the Lean community is doing (building an "executable" collection of mathematical knowledge). Perhaps there are some works you can point to.
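To make the "executable mathematical knowledge" idea concrete, here is a minimal illustrative sketch in plain Lean 4 (not actual mathlib code, and not from the paper): in Lean, a statement is only accepted once it carries a machine-checked proof, and an open conjecture can be recorded with its proof deferred via `sorry`.

```lean
-- A proved statement: the proof term is checked by the Lean kernel.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A freshly generated conjecture would look the same, but with the
-- proof left open until a human or a tactic fills it in.
theorem generated_conjecture (a b : Nat) : a + b = b + a := by
  sorry
```

One could imagine an automated conjecture generator emitting statements in this second form, handing the formal-proof step to the Lean community or to proof-automation tools; whether that pipeline is practical for the identities in the paper is an open question.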