Yet you yourself also need to explain why that statement is 'nonsensical'. It is no good simply asserting that it is without giving any explanation.
Unless, that is, you can give a thorough explanation of how these LLMs can internally explain themselves transparently and reliably, to the point where we don't need to check their outputs?