I don't speak from authority, but I do speak from experience as an academic. Scientific Reports is run by Springer, which is more competent than Elsevier but just as predatory. Its acceptance rate is 48%, and in 2021 it published roughly 23,000 articles. (You can check this here: https://www.nature.com/srep/research-articles?year=2021.) At a publication fee of almost $2000 (Wikipedia), this is a money-making machine, as skewered by https://www.youtube.com/watch?v=8F9gzQz1Pms. I think it's obvious what the incentives are here.
The reason the method is not robust is that the typical polygenic score explains only 10% or less of the variance of its target phenotype. That leaves 90% of the variance unaccounted for by the control, which means your error term will still be correlated with your focal independent variable, violating the requirements for regression to give an unbiased estimate. I don't think these claims are controversial. We know polygenic scores are noisy. We know what happens when your control variables are noisy: the confounding they were supposed to absorb is only partially removed.
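You can see the problem in a five-minute simulation. The sketch below is hypothetical in its magnitudes (effect sizes, the noise level on the score) but illustrates the mechanism: an exposure with a true causal effect of zero on the outcome, both confounded by an unobserved genetic liability, "controlled" with a noisy polygenic score that tracks that liability only weakly. The estimated effect barely moves from the naive, fully confounded estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved true genetic liability (the confounder)
g = rng.standard_normal(n)

# Exposure is driven partly by g; its TRUE causal effect on y is ZERO
x = 0.5 * g + rng.standard_normal(n)

# Outcome depends on g only, not on x
y = 0.5 * g + rng.standard_normal(n)

# A noisy polygenic score: corr(pgs, g) ~ 0.32, so it captures only
# ~10% of the variance in the liability (hypothetical noise level)
pgs = g + 3.0 * rng.standard_normal(n)

def ols_slope(y, *cols):
    """OLS via least squares; returns the coefficient on the first column."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

b_naive = ols_slope(y, x)        # no control: fully confounded
b_pgs   = ols_slope(y, x, pgs)   # "controlling" for the noisy score
b_true  = ols_slope(y, x, g)     # oracle control with the true liability

print(f"naive:        {b_naive:.3f}")   # ~0.20, pure confounding
print(f"PGS control:  {b_pgs:.3f}")     # ~0.18, barely better
print(f"true control: {b_true:.3f}")    # ~0.00, the truth
```

The algebra agrees with the simulation: controlling for a proxy that captures a fraction r² of the confounder's variance removes only about that fraction of the bias, so a score explaining 10% of the variance leaves roughly 90% of the confounding intact.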
The fact that lots of people do it doesn't, sadly, make it work. Lots of social psychologists run trials with an N of 35 (though they're addressing this critique, to their credit). Lots of historians fail to specify their hypotheses and to search for disconfirming data. Economists spent the 80s and 90s running cross-country regressions, before realizing that they had, in aggregate, more independent variables than cases. And so on.