I am not aware of any attempt to approach software engineering scientifically in general (though it seems like the kind of thing that would have been tried). I have seen some fairly scientific approaches to performance and efficiency. What do you think should be done? More peer review, perhaps? Blinded testing of code complexity and development difficulty across different codebases?
When people say software engineering should be approached more scientifically, what they usually mean is that they want a scientifically verified process that repeatably produces optimal results. Unfortunately, what science has been done has generally shown that good people produce good software, bad people produce bad software, and that it doesn't seem to matter much what process you lay on top of them. (To the extent that sounds tautological, well, I'm summarizing here.) Broadly speaking, the scientific evidence we do have about software engineering points to the conclusion that science isn't going to hand us the single, science-approved process people are asking for. So we'll continue to hear this complaint for at least the next several decades.
(Science can nibble around the edges of the problem of software engineering, but even the results we do have strike me as likely to suffer from the usual problems of taking small, isolated samples from an n-dimensional space and then trying to extrapolate. For one thing, almost every study you've ever heard of that establishes some "fact" about software engineering was done on students. Scientifically speaking, there's no particular reason to expect such results to translate to professionals in any particular way.)