The challenge will be to fix this whilst keeping the precise, mechanical and quantitative spirit of science alive.
The reason scientists have a natural "disdain" for sociological studies is that those studies often lack these qualities. Many of the arguments aren't convincing from a critical angle, and are only convincing to people with a pre-existing bias towards certain conclusions. Yet the scientific culture of today is falling into that same trap itself.
The article criticises metric-driven incentives, but it is not metrics (the general concept) that are at fault. It is the choice of metric, and the meta-analysis of this, that is lacking. These choices are themselves often backed up by vague, non-scientific arguments, of a similar sort to those that scientists often criticise other fields for depending upon.
We must certainly not conclude from these studies that quantitative analysis is itself what is at fault. I know the article doesn't explicitly say this, but it hints at it - using suggestive phrasing like "disdained sociological studies" and referring to all "metric incentives" as a single group - and it is a point I have seen made by many non-scientists. That is, using these flaws as a straw man to attack the very qualities of what has made science so successful and useful.
To improve the situation, we must reject these straw-man arguments against science, and develop better methods that are more quantitative, over a broader spectrum of what is being analysed, and that are more self-critical.
And yet, the greatest scientists in history did NOT have the stereotypical Scientist's Disdain for sociology or softer topics. For example, Einstein said things such as "As far as [the laws of mathematics] are certain, they do not refer to reality" and "All religions, arts and sciences are branches of the same tree."
There are plenty of brilliant scientists who are held back by their excessive rationality, or whatever you want to call it. Another great book on this topic as it relates to econometrics (as opposed to physics) is "The Romantic Economist" by Richard Bronk.
There's a difference between "disdain for sociology" and "disdain for the way sociology is drawing conclusions from the experiments it performs".
Sociology is very important to study. The tools we have for it are not great, in various ways. For example, one can study people and societies by reading Balzac's writing, but the number of people who can produce that sort of thing is fairly limited. We can try to do controlled experiments, but the way we do it in practice is not great. We really really need better tools here...
Indeed, psychology and sociology are much more difficult fields of study than the hard sciences. Physical laws, however tricky they are, don't seem to change at all, and tend to be the same everywhere you look. Psychology is about studying the behaviour of an advanced computing system that's about as smart as the researchers themselves. Sociology is about studying how those advanced computing systems interact with each other at scale. It's insanely difficult, and that's why it's so hard to even come up with an experimental setup that makes some sort of sense. Not to mention ethical issues (many experiments would be so much simpler if you could disregard the well-being of the test subjects).
So yeah, personally, I have utmost respect for the complexities involved in sociology - while at the same time I absolutely hate all the bullshit that's being done because doing actual research feels too hard.
Just who do you suppose was held back by "excessive rationality"? I'd suggest that many many more people in the world are held back by "not enough rationality".
There is no stereotypical "scientists' disdain" for those topics. It is vital to understand history, social sciences, economics, politics, and the arts. What is disdained is the way these topics are, in the present world, not pursued along more rigorous lines.
Too many people like to look good by producing grand-sounding theories about these topics, without putting their ideas through a more rigorous process to check whether they are actually true or not. Then we get a cohort of followers who believe these theories as if they were "as true" as mathematical theorems, but they are not. This wastes everyone's time, and worse.
One can certainly picture an overly-specialised scientist who doesn't know anything other than the specific field they chose - I guess this is what you mean by "excessive rationality" - but I don't see many cases of this actually existing. What I see in far greater numbers is the problem I mentioned in the previous few paragraphs, as well as people using the "excessively rational" image as a straw-man argument to attack scientists, to distract the world from their own flaws, of "not enough rationality".
I agree with everything except your last paragraph. You're mostly on track with what I'm talking about. There are plenty of people who throw a bunch of bullshit into a "grand-sounding theory." And on the other hand, just because something isn't backed by a mathematical theorem, doesn't mean it isn't true, or useful!
Agreed that unproven does not imply false, and there is a tendency for people purporting to be "skeptics" to make this fallacy. There's even a name for it: https://en.wikipedia.org/wiki/Fallacy_fallacy
However, it's too easy for bullshit-artists to hide behind this, as a way to deflect criticism of the theories that they're proposing, or the general area of work that they've chosen to pursue.
We're all human, so it's perfectly reasonable to believe in something that's unproven - but then be honest about it, and explore the topic from a critical viewpoint without getting personally defensive. Unfortunately I see many more cases of this than instances of the fallacy-fallacy.
Might that be just another bit of asininity from Einstein, innit? Sure, he can go and say that, and the former is a widely held belief, but he's got nothing on the Pythagoreans, does he? If numbers were surreal, what are we talking about then? If they are just in our imagination, what isn't? Where did he say that? Also, a tree has more than branches. Religion is like a rotten root, or some other evil imagery, if you will.
Edit: maybe he was a nihilist; I'd tolerate nil as the only non-quantitative number. Sorry for the rant, but that's what you get for anecdotal evidence.
> whilst keeping the precise, mechanical and quantitative spirit of science alive
It's not obvious to me that measuring scientists or research quality has anything to do with the "spirit of science." Just because science involves very carefully measuring the objects of study, it doesn't follow that science is best served by trying to meta-measure that process. And there is evidence that it can be harmful.
So how do you ensure that scientists stay careful and try to be objective, and how do you have any objectivity in allocation of funds etc. without some metric? I hate the publish-or-perish system, but that is due to wrong metrics. If (purely pulling this out of my ass, probably a horrible idea) a flimsy study counted negatively towards your index and a faulty one outright wrecked it (with complete ruin if you do not redo/retract), that would probably skew the metric optimisation towards more carefully thought-out, substantial research.
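That back-of-the-envelope rule could be sketched as a toy scoring function. Entirely hypothetical, as the comment itself stresses; the category names and penalty weights are my own arbitrary illustration:

```python
def researcher_index(papers):
    """Toy index per the suggestion above: solid work adds, flimsy work
    subtracts, a faulty-but-retracted study costs heavily, and an
    unretracted faulty study means complete ruin (score zeroed)."""
    score = 0
    for paper in papers:
        if paper == "solid":
            score += 1
        elif paper == "flimsy":
            score -= 2  # flimsy studies count against you
        elif paper == "faulty-retracted":
            score -= 5  # faulty but honestly retracted: heavy penalty
        elif paper == "faulty":
            return 0    # unretracted faulty study: complete ruin
    return score
```

Under such a rule, publishing nothing beats publishing junk, which is exactly the skew towards careful work the comment is after.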
I'm not sure that scientists need to be super objective for science to work. The history of science is so littered with grudges, feuds, and grit in spite of evidence that it's hard to see the ground of objective rationality that many assert is behind it.
And pandemonium models [1] of computation show that the right rules of communication can produce astonishing order from chaotic self-interest. If scientists were forced to pre-publish their methods and share all data, science would probably work better. This is true even if this change had no effect on their behavior, or even if they made every effort to game the system.
This improvement to science doesn't ask a committee to define or measure anything ineffable, and it doesn't expect individuals to change their behavior. It just changes the rules of interaction in a way that better favors the system's epistemological progress.
Objective scientists are a good thing, but are not - and should never be - considered an essential component of the system. Any system that assumes honesty and fair play in its participants is doomed to failure. What science was aiming for is getting the true results from the aggregate - thus peer reviews, replication, etc.
Of course the more noise you have, the less efficient the system is, so it's good to incentivize people to do honest, objective work - thus pre-publishing / pre-registering, sharing data and algorithms, etc. are all good and important goals. But so should be changing the metric affecting the aggregate - like making sure scientists are actually incentivized to replicate previous work.
This was figured out long ago. Before people should believe you know what you are talking about, you need to make precise and accurate predictions about the future (comet should reappear on x date) or perform engineering feats (pull a ship out of the water using only your own manpower).
People just don't like this real solution because it means many (really, most) of the wild speculations they hold as dear dogma (and have been calling science) will need to be reassessed using the correct metric. There will undoubtedly be a lot of egg on many faces.
> how do you have any objectivity in allocation of funds etc. without some metric?
Our need for "objectivity" here is to a large extent driven by the centralization of the funding allocation that http://mcadams.posc.mu.edu/ike.htm warned about and has since come to pass. If funding were more decentralized, lack of objectivity in any particular funding source would be _much_ less of a problem.
Specifically: "The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded."
It wasn't obvious to people living thousands of years ago that science would be the way to move civilisation forward, but here we are. And just because it "doesn't follow" that these methods would improve science when applied to itself, doesn't mean that the latter idea is false either.
We follow these principles out of observing history. But part of it is an act of faith, I'll give you that. What sort of "evidence" are you referring to, that it would be harmful?
I don't think the problem of the article is with quantitative methods per se. Rather, I read it as a gripe about the specific metrics employed, frex, the h-index.
A problem is that any such metric is only ever going to be a proxy for some assumed real value, and such a proxy is always going to be controversial, particularly if it's used to control access to funding.
Which I think at the end of the day is the big issue here. All this soul-searching is just a big attempt to decide what we spend money on. Who gets to be a scientist, and who gets to become a person with a lot of useless knowledge and an incurable sense of failure.
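For concreteness, the h-index mentioned above is trivial to compute, which is exactly what makes it such a tempting but lossy proxy. A sketch of its standard definition:

```python
def h_index(citations):
    """h-index: the largest h such that h of the author's papers have
    at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h
```

Note how much the proxy throws away: once a paper clears the bar, a 500-citation result and a barely-cited one contribute identically, so "quality" collapses to a single small integer.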
It could be read either way - it doesn't talk too much about differences between metrics, and groups them as near-equivalent. Anyway the exact intention of the author/article is not so important. My point is to say that one way that you could (and I'm sure some will) interpret it, is not good.
Certainly we need to look at our social reward/incentive systems from an adversarial point of view, in science, economics, politics and elsewhere. Game theory is useful for that, and it can help us develop systems that are less-easily susceptible to the flaws that the article mentions.
Basically, science is incapable of introspection, at least at this moment in time. But a good scientist will make good science out of any social subject.
Academic scientific activity is still mostly a social activity because it is done through the communities and the social interaction between scientists and institutions. Specific problems may be solved by specific brains or groups of brains, but the output and the collaboration is all social. The rewards and incentives are all social systems. And at the end of the day, any time we write a blog or post a comment, it's all social. This is social.
But the disdain for the social sciences by non-social scientists is discrimination, warranted only to the extent that they can avoid social and professional interactions with social scientists and feel immune from any criticism for their bigotry. The moment a scientist wishes to study the social aspects of science itself, they have no choice but to accept social science as a science.
But this is okay, because reputation has no consequence in problem solving; reputation is a social device, not a solvent. The track record of a field, a department or an institution, the history of publications, or the publications of an individual social scientist are all largely irrelevant when faced with your own research targets. The only things relevant are the available data and the premises chosen for any model. There is no need to criticize the scientific integrity of the work of others if you can do better. Do it yourself if you have to.
I think quantitative analysis is at fault. The thing is, mathematics is a language, and I can as easily produce a model that encapsulates the lies we tell ourselves in newspapers as I can produce a model based on Hitler's Mein Kampf.
Actually, I thought yesterday about how to mis-construct a human detector for photos. One possibility would be to run face detection and then measure the average pixel color of the face. That would probably get me quite nice detection rates if I test on a dataset with the racial makeup of the average CS lecture. The thing here is that I push the false negatives purposely onto people of color. It is not hard then to invent a story about how 'mathematics proves' that black people are more similar to apes than white people.
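A minimal sketch of that deliberately flawed detector, for concreteness. The face-detection step is elided, and the brightness threshold is my own arbitrary illustration:

```python
def flawed_human_detector(face_pixels, brightness_threshold=100):
    """Deliberately flawed 'is this a human?' check: average the pixel
    brightness of an already-detected face region and accept it only if
    it is bright enough.  On a dataset skewed towards light-skinned
    faces this scores well, while pushing darker faces into the false
    negatives: exactly the failure mode described above."""
    mean_brightness = sum(face_pixels) / len(face_pixels)
    return mean_brightness >= brightness_threshold
```

The aggregate accuracy number looks fine on a biased test set; the harm is invisible unless you break the errors out by subgroup.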
We currently just don't have anything to ground models of human interaction in,^1 and what is worse most people treat models just as previous generations treated prophecies. They don't understand math and believe in it because they don't understand it.
^1 I am actually not saying that models are worthless, I am saying that most complex models only show what the author wants them to show and they can be as easily manipulated as an essay.
The flaws you're pointing out are flaws with the non-quantitative arguments used to justify choosing specific flawed quantitative models. This does not mean quantitative analysis as a whole is flawed.
Someone goes and builds a bridge. The bridge collapses because they applied mathematics in a way to maximise profit, with a safety margin just beyond what they can get away with by existing regulations. 5 million other bridge engineers also do the same, so that eventually bridge engineers get a bad rep, and people think bridges are awful awful things. This doesn't mean mathematics or quantitative analysis is at fault - and in fact these tools can be used to examine the incentives and other social dynamics that led to these situations, to be able to fix them more effectively in the future.
That is what I meant by 'grounding.' I say that mathematics can be a useful tool; however, contrast this with the situation in physics: we have comprehensive theories, and we can be very certain that all future theories will contain something that can be identified with atoms. Even more, if one uses a mathematical theorem, we can be confident that there is something in nature that corresponds to the results. We don't have anything similar in other intellectual endeavours, but people treat physical and non-physical models similarly.
We have a very good theory of gravity, general relativity, and actually for anything that was ever used to build anything including the Voyager probes, Newtonian gravity is enough. In a sense we even have a safety net.
As for dark matter, the reason we don't have a good idea what dark matter is, is that it has almost no effect on normal matter. So the reason we can ignore dark matter everywhere except at the frontiers of physics is that dark matter only matters when one spends awesome resources trying to detect it.
Dark matter isn't "ignored", it is unknown. Not because nobody cares but because, as the parent said, the current scientific paradigm is incomplete and by extension incorrect.
The problem is that we want "quality science" (whatever that means!), but we don't know how to quantify that, or indeed really define it. So a quantitative metric will necessarily be measuring some sort of proxy or set of proxies for "quality", and then you will get people optimizing those proxies, not "quality". To the extent that those proxies miss something important, it will be underinvested in.
Unfortunately, I don't have a better proposal, perhaps short of taking the warnings in http://mcadams.posc.mu.edu/ike.htm to heart (the ones that are NOT about the military-industrial complex). Doing that might change the general funding climate sufficiently that the need for deciding "quality" like we do now may simply become less critical.
>The reason scientists have a natural "disdain" for sociological studies is that those studies often lack these qualities. Many of the arguments aren't convincing from a critical angle, and are only convincing to people with a pre-existing bias towards certain conclusions.
> It is the choice of metric, and the meta-analysis of this that is lacking.
They need something like a "Journal of Meta-Research", which would research the best ways to perform research. You'd think something like that would get a lot of funding...
Outside of fields with direct penalties for invalid science (think aviation, nuclear reactors, sinking boats, collapsing bridges), we're just relying on the integrity and honor of researchers. On an individual level it's much more effective to cut corners and publish trendy research than it is to make valid science.
The situation is only made worse by the fact that universities reward this; because they too are rewarded more for high-profile research than they are for scientific validity.
My guess is that most studies in softer sciences are simply erroneous. Many social sciences have already been shown to be unreliable by the replication crisis.
My personal experience comes from reviewing hundreds of published studies that evaluated the applied effectiveness of machine learning models. Half had significant statistical errors.
The stats errors are so common and easy to catch these days that there is a statcheck (à la spellcheck) program (can't find the link, sorry) out of the Netherlands that trawls through journals and checks the stats. Mind you, these are peer-reviewed and published articles. Still, about half have erroneous stats, sometimes ones that invalidate the paper. Yeah, we all screw up, but it seems like a lot of the screw-ups happen much too often.
As for a place that lists common stats errors and how to fix them? Well, just about every single intro-to-stats book, the entire R language, just about any stats library for just about any language, google, etc. The issue is not that the folks are making honest mistakes, that happens; it's that the system is perverse and incentivizes the sloppiness.
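To make that kind of automated check concrete, here is a sketch of the core idea (my own illustration, not statcheck's actual API): recompute the p-value from the reported test statistic and flag any mismatch with the reported p.

```python
from statistics import NormalDist

def check_reported_z(z, reported_p_upper):
    """Recompute the two-sided p-value from a reported z statistic and
    flag it if it contradicts the reported 'p < reported_p_upper'
    claim.  (Sketch only; real checkers also handle t, F, chi2, r.)"""
    recomputed_p = 2 * (1 - NormalDist().cdf(abs(z)))
    consistent = recomputed_p < reported_p_upper
    return recomputed_p, consistent

# A paper reporting "z = 2.50, p < .01" would be flagged: the
# recomputed two-sided p is roughly .012, which is not below .01.
```

Everything needed is in the stats tables of any intro textbook; the point is that nothing stops this from being run mechanically over entire journals.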
I agree that governments should stop using metrics to evaluate science in this way, and instead qualitatively review research output.
It's obvious that citation counts are open to manipulation (e.g. through self-citation and encouraging citations from others via various mechanisms) but there's also a very large factor that can't be easily quantified: an enormous amount of good will has been lost in the system.
Many scientists who are interested mainly in satisfying their curiosity and contributing to society resent the top-down mismanagement of science in the UK and US. They see a system where those who play political games do well, but the smartest, most dedicated, and passionate researchers are often sidelined or ignored because they do not spend the requisite time playing the political system to artificially boost their reputations. Many times I've seen very talented academics with huge potential leaving academia as they were passed up for promotion, left on a temporary contract indefinitely, or neglected in other ways.
Strangely enough, I think that if you don't have pressure to climb the ladder (e.g. a family), then the current system offers great opportunities to do outstanding science. If you choose to stay low on the ladder and spend the vast majority of your time actually focusing on research problems, you can get far more actual research done than those chasing promotion or esteem. And because so many people are focused on their citation count, or some political game, the competition isn't as strong as it should be - you can make real advances if you quietly focus and leave the politicians to fight each other.
Noting that epistemology is concerned with the study of (the concept of) knowledge, and may (in principle) go beyond what’s involved in the purely scientific acquisition of such (if some non-scientific method for the acquisition of knowledge is to be accepted).
“Philosophy of science,” already mentioned, is probably a better fit. Once you go meta on science, after all, you are (arguably) not doing science anymore, so there is no harm in naming it a kind of philosophy.
This is presented as if it is a new problem. It stems back to the 1940s with the adoption of NHST (null-hypothesis significance testing, with its arbitrary "statistical significance" metric), first by educational researchers, then by psychologists, before spreading like a cancer throughout the research community from there.
Things have only been getting worse and worse as the "old guard" in each field retires/dies, leaving behind only people trained to think that rejecting a straw-man hypothesis according to an arbitrary metric, and then concluding something about your own hypothesis, counts as science.
The irony is that this is published in Nature, which undoubtedly is a huge contributor to questionable research practices by publishing spectacular findings and ignoring negative results and replications.
I think it's more helpful to blame the incentives than the individual actors. Nature isn't malicious or powerful enough to break science all on its own. It didn't write the rules, it just plays by them.
Like most problems, it comes from too many people chasing too few dollars. The ultimate cause was the construction of a system that trains too many people for the money available to employ them.
The current system was deliberately constructed by politicians after WWII when we had more money (and demand for science) than scientists. They set out to create more scientists by designing a system where each scientist trained 10 to 100s of additional scientists (PhDs). This worked great until around the mid 1970s when the supply of scientists finally caught up with the money. Since then the problem has been getting worse.
Here are some I can list: P-values, the h-index (article discusses), the funding crunches and (US specific) the cyclic nature of NIH funding, the massively skewed incentives of publish or perish, the entire idea of trying to assign value to discoveries (article discusses), the 2-body problem of family life and science (at least the Germans expect you to have no life and are clear on that), the work-life non-balance, the relatively very poor pay, the hyper-competition and sabotage (in some fields, labs, or universities), the total ignorance of doctors/politicians that take your work and apply it in appalling ways, the entire racket of scientific publishing, etc.
As always, we should point newcomers to this discussion to one of the most-read scientific papers EVER, John P. Ioannidis's "Why Most Published Research Findings Are False" [0]. If you don't have access and don't want to use Sci-Hub or ICanHazPDF, here is a YouTube video of John discussing his work [1]. You can use the google to really jump into a rabbit hole here. Essentially, flip a coin: heads the paper is right, tails the paper is wrong. It really is that bad these days.
Ok, so let's just scrap the whole thing then, right?
So, yeah, science is a mess right now. But if we scientists don't step up and make it not a mess, those scummy money-grubbing scammer dirtbags and their ilk are more than happy to take up the slack. These fucking asshats are going to kill people like your poor uninformed cousins and their kids and take all their money, because they think homeopathy is right and the moon landing was faked.
So, you young scientist that looks at this pile of garbage that is modern academia, do not despair! Fight the good fight! Yes, you may end up shirtless and ridiculed by the older scientists. But you have to do what you think is right because the rest of this world is depending upon you! Even for that tiny little bullcrap paper you are getting out just to graduate, that matters too. Be in the mud, be in the arena, fight for truth!
(It is a wee bit disturbing that first link isn't to PLoS)
Post all your papers as preprints. Don't bottom for some grumpy old PI. Academia is not scholarship -- the latter is frowned upon within the former, as it takes longer than simply churning out some useless publoid garbage result.
Be like Faraday, Newton, Varmus and Eisen... not like Bem.
>"These fucking asshats are going to kill people like you poor uninformed cousins and their kids and take all their money"
They have already taken over healthcare and passed laws to force you to pay for their "help". To get you started: people are quitting cancer reproducibility projects out of disgust for the low quality, and just trying to figure out wtf was done to generate the data is draining all the funds before they can even attempt replication:
"Early on, Begley, who had raised some of the initial objections about irreproducible papers, became disenchanted. He says some of the papers chosen have such serious flaws, such as a lack of appropriate controls, that attempting to replicate them is “a complete waste of time.” He stepped down from the project's advisory board last year.
Amassing all the information needed to replicate an experiment and even figure out how many animals to use proved “more complex and time-consuming than we ever imagined,” Iorns says. Principal investigators had to dig up notebooks and raw data files and track down long-gone postdocs and graduate students, and the project became mired in working out material transfer agreements with universities to share plasmids, cell lines, and mice.
[...]
ALTHOUGH ERRINGTON SAYS many labs have been “excited” and happy to participate, that is not what Science learned in interviews with about one-fourth of the principal investigators on the 50 papers. Many say the project has been a significant intrusion on their lab's time—typically 20, 30, or more emails over many months and the equivalent of up to 2 weeks of full-time work by a graduate student to fill in protocol details and get information from collaborators. Errington concedes that a few groups have balked and stopped communicating, at least temporarily."
Also, it seems 4 out of 5 doctors will tell you there is a 95% probability you have a disease when there is actually a 98% chance you do not have it. Forty years of education reform has not affected this at all (admittedly, this needs to be repeated on a larger scale, but from personal experience I have no doubt it will hold):
"Nearly 40 years ago the New England Journal of Medicine published a short survey of doctors’ understanding of the results of diagnostic tests.1 The participants, all doctors or medical students at Harvard teaching hospitals, were asked, “If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing else about the person’s symptoms or signs?” This wasn’t a very difficult question, which made the results all the more shocking. Fewer than a fifth of participants gave the correct answer, and most thought that the hypothetical patient had a 95% chance of having the disease.
Of course, this was a long time ago, and medical curriculums now contain much more in the way of statistics and probabilistic reasoning. You might expect that if the exercise were repeated today almost everyone would give the right answer. But you’d be wrong. Earlier this year a similar study was carried out, also in hospitals in the Boston area of Massachusetts, and the results were no better.2 Most doctors who were asked exactly the same question thought that the patient had a 95% chance of having the disease."
http://www.bmj.com/content/349/bmj.g5619
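The arithmetic behind that roughly-2% figure is a single application of Bayes' rule. A quick sketch, assuming the test is perfectly sensitive (the question leaves sensitivity implicit):

```python
# The Harvard survey question: prevalence 1/1000, false-positive rate 5%.
prevalence = 0.001
false_positive_rate = 0.05
sensitivity = 1.0  # assumed; the question doesn't state it

# P(positive) = P(pos | disease) P(disease) + P(pos | healthy) P(healthy)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' rule: P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(p_disease_given_positive)  # ~0.0196: about a 2% chance, not 95%
```

The intuition: among 1000 people, roughly 1 true positive drowns in about 50 false positives, so a positive result is still overwhelmingly likely to be wrong.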
Medical errors were "unintentionally" left out as a possible official cause of death, then when people actually estimate this, it is a leading cause:
"In 1949, Makary says, the U.S. adopted an international form that used International Classification of Diseases billing codes to tally causes of death...medical errors were unintentionally excluded from national health statistics...based on a total of 35,416,020 hospitalizations, 251,454 deaths stemmed from a medical error, which the researchers say now translates to 9.5 percent of all deaths each year in the U.S...According to the CDC, in 2013, 611,105 people died of heart disease, 584,881 died of cancer, and 149,205 died of chronic respiratory disease—the top three causes of death in the U.S. The newly calculated figure for medical errors puts this cause of death behind cancer but ahead of respiratory disease."
This is the guy that published the irreproducible results paper whose results were (of course) irreproducible (since it did not name any). I.e., "trust me on this", the opposite of science.
Note above that you can pay for more of his sage wisdom, thanks to the generous NatureMacSpringer megaconglomco.
Post everything as a preprint. Submit your trial results to OpenTrials or OpenFDA. Quit killing patients. Maybe work in pediatrics so that, right or wrong, you still won't make any money, so you might as well not kill any excess kids.
I'm not bitter or anything, after participating in clinical trials and analysis for over a decade...
To be honest it's really only adult trials that are fucked. Well, those and rare diseases with a noisy yet ignorant "advocacy" lobby ("We won't have to pay if the drug doesn't work, the magical Insurance Fairy will pay for it!"). I understand the latter (some hope is better than none, if it's your kid that's sick) but as for the former... ick.
Take away the profit motive and it's astounding how much more reliable the trials get. Of course reviewers whine about how they're often equivocal or negative, because the thought of the literature being horribly biased either
a) has never crossed their mind (too busy with protocols),
or
b) is precisely what they want, for "showing progress".
So in order to keep up the appearance of positive results and keep the gravy train flowing, the trials that get published tend to be the ones that "show" "progress".
Mind you, these are clinical trials with preregistered endpoints and protocols. Pretty much the entirety of experimental science outside of physics is much worse. So what I'm telling you is that this is the BEST-CASE SCENARIO for an awful lot of science.
I'm not going to write up "my story" beyond this, because it's the same as most everyone else's story in the field. Killing humans for profit is not my idea of a good time, even if adult trials tend to pay better than the Army.
YMMV. I may be jaded, but I still believe there are populations that, through no fault of their own, end up with few or no options. There is room in this world for integrity -- but get (real) money involved and the bad money drives out the good. The linked paper is dead on.
Science hasn't existed forever, and Foucault already smashed to bits any sort of "evolution with direction" account of science in The Order of Things.
The most disturbing thing appears to be that many folks aren't aware of Foucault's work.