Well, for example: if people don't understand the difference between continuous, discrete, and unrelated (categorical) values, their models will have major flaws. Say they are trying to build a NN to predict customer orders by geographic area. If they treat zip codes as continuous or ordered discrete values they're going to get really strange results, because ML is ultimately just interpolation/extrapolation, and there is no meaningful order between zip codes to interpolate over. I don't know how well drag and drop can convey those principles.
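A minimal sketch of the zip code pitfall, using scikit-learn and made-up toy numbers (the zips, order counts, and model choice are all hypothetical, just to illustrate the point):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder

# Toy data: three zip codes and average order counts per area.
zips = np.array([[10001], [10002], [90210]])
orders = np.array([50.0, 55.0, 200.0])

# Wrong: treating the zip code as a continuous number. The model
# "learns" that orders grow with the numeric zip value, so it will
# happily interpolate a prediction for zip 50000 -- which is meaningless.
bad = LinearRegression().fit(zips, orders)
meaningless = bad.predict([[50000]])  # falls "between" 10002 and 90210

# Better: treat zip codes as unrelated categories via one-hot encoding,
# so each zip gets its own column and no spurious ordering is implied.
enc = OneHotEncoder()
X = enc.fit_transform(zips).toarray()  # shape (3, 3): one column per zip
good = LinearRegression().fit(X, orders)
```

A drag-and-drop tool can hide exactly this step: if the column arrives typed as an integer, nothing stops the user from feeding it straight into a regressor.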
I see user skills and the fact that the tool is drag and drop as two different issues, in this context.
A fluent Python developer who doesn't understand basic ML concepts can easily use something like Scikit to code and build "wrong" models. By "basic concepts" I mean standard tasks like data preparation for specific algorithms, sampling, evaluation methods, testing for bias - generally, how to properly execute the ML tasks prescribed by a process like CRISP-DM.
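One concrete example of such a "wrong" model, as a sketch in scikit-learn (the random data and classifier choice are hypothetical): evaluating on the training set instead of a held-out test set, which is a classic evaluation mistake fluent coders still make.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Pure noise: random features, random labels. There is nothing to learn.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

# Wrong: train and evaluate on the same data. An unconstrained tree
# memorizes the noise and reports perfect accuracy.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
train_acc = accuracy_score(y, clf.predict(X))  # 1.0 -- looks great, means nothing

# Right: hold out a test set. Accuracy drops to roughly chance,
# revealing that the "perfect" model learned no real signal.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
held_out = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
test_acc = accuracy_score(y_te, held_out.predict(X_te))  # ~0.5
```

The coding part here is trivial; knowing *why* the second number is the one that matters is the ML knowledge the comment is talking about.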
Someone with basic coding skills - e.g. SQL and some imperative programming - but with a solid understanding of ML tasks and how to execute them properly probably stands a better chance of getting good results, using something like IBM Modeller or RapidMiner, than the former.
Note that I'm not saying that a drag-and-drop tool is superior; you could build a flow-based GUI for Scikit, so a tool like this is always, at most, an interface to some code libraries (Scikit, in this example). Having full access to the actual library, or just better libraries, is likely to be less constraining and better suited to more sophisticated approaches.