
Because of bibliometrics.

Academics are judged on stupid factors based on the quantity of their publications and citations rather than on the quality of their research.

The idea of bibliometrics is that you can compare, on a single dimension, individuals who do not work on the same subject, who use different methods, who work in different communities with different publication habits, etc. Of course this is completely stupid.

It's like saying that a cyclist who has participated in 10 Tours de France is better than a judoka who has participated in only 3 Olympic Games. Only here you also give the cyclist a job and the funding to do it properly, and tell the judoka to go f* himself until he participates in more Tours de France.

Of course, the thing is that you can't even have good criteria, because any criteria you set necessarily become the goal: instead of doing good research and publishing what is necessary, when and where it makes sense, academics are forced to do research and adopt a publication strategy that satisfies the arbitrary criteria. And even in the unlikely case where the criteria match what actually matters, they cannot possibly do so across so many different fields of research.

The only good solution would be to judge academics on the two or three publications most relevant to the grant or job in question. That would require actually reading the publications and, if possible, the reports from the peer-review process (which open peer review would make available).




Here are some examples of bibliometrics:

- http://en.wikipedia.org/wiki/H-index The big one. Used to judge researchers.

- http://en.wikipedia.org/wiki/G-index Another one for researchers. I haven't really come in contact with this one very much.

- http://en.wikipedia.org/wiki/I10-index Google's own little algorithm.

- http://en.wikipedia.org/wiki/Impact_factor Older. Used to judge journals.

The first and last articles have fittingly long "criticism" sections. A key point to make here is that these bibliometrics succeed at their basic goal (providing one platform by which to judge all academics/publications) at the price of misaligning incentives.

As a result, you do get to judge all academics on an even platform, but that platform is a weighted average of how well they do research and how well they play the politics/popularity game, scaled by the popularity of their field (good luck finding me a researcher in theoretical plasma physics with an h-index over 30 -- I'm not sure there are even 30 theoretical plasma physicists in the US).
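
For concreteness, here is a minimal sketch of how these indices are computed from a plain list of per-paper citation counts (Python, with made-up numbers; the precise definitions are in the Wikipedia articles above):

    def h_index(citations):
        # largest h such that h papers have at least h citations each
        cites = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

    def g_index(citations):
        # largest g such that the top g papers have at least g^2 citations in total
        cites = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, c in enumerate(cites, start=1):
            total += c
            if total >= rank * rank:
                g = rank
        return g

    def i10_index(citations):
        # number of papers with at least 10 citations
        return sum(1 for c in citations if c >= 10)

    papers = [42, 18, 12, 9, 7, 3, 1, 0]  # hypothetical citation counts
    print(h_index(papers), g_index(papers), i10_index(papers))  # 5 8 3

Note that nothing in these numbers says anything about what the papers contain: the same counts could come from a handful of groundbreaking results or from a well-organised citation ring.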

On top of this, sites like ResearchGate (like LinkedIn, but for researchers) give people their own score, which is pretty opaque, and display it in bright green next to everyone's profile picture. It introduces a lot of competition to a field that doesn't really need it.


Academia really is a massive wank


It is a dire situation, but the fact that the researcher(s) went to great effort to get into that particular journal shows that at least the perceived quality of the journal is still being used as a rough metric for the quality of the research. So quality is still a factor, even if quantity has far too much weight.

If they had simply wanted to get published, then it seems like there are plenty of journals that will accept almost anything (they get paid on publication). More than 150 out of 300 open access journals accepted an obvious spoof paper last year[1].

[1] http://www.sciencemag.org/content/342/6154/60.full


Of course, the "impact factor" of the journal also counts; however, it is gamed too. For instance, there are publishers that artificially inflate the impact factors of their journals by asking authors of accepted papers to cite a few articles recently published in the same journal.
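
To see concretely why that works, here is a back-of-the-envelope sketch of the standard two-year impact factor with invented numbers (Python):

    # Two-year impact factor: citations received in year Y to items
    # published in years Y-1 and Y-2, divided by the number of citable
    # items published in Y-1 and Y-2. All numbers are invented.
    citable_items = 200        # articles the journal published in Y-1 and Y-2
    outside_citations = 300    # citations those articles receive in year Y
    print(outside_citations / citable_items)               # 1.5

    # Now suppose the journal publishes 100 new papers in year Y and asks
    # each author to add 2 citations to its own recent (Y-1, Y-2) articles:
    coerced = 100 * 2
    print((outside_citations + coerced) / citable_items)   # 2.5

A couple of coerced references per accepted paper is enough to move the headline number substantially, which is exactly why the practice exists.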

Now, I would like to come back to the "article" you cite. Let me get this straight from the beginning: this text (by John Bohannon) is a piece of shit. Let me explain why:

The article aims to compare open access journals with closed access ones. The method used for that purpose is quite remarkable. It consists of sending a paper of very poor quality, with wrong results in it, to many gold open access journals (meaning there is what is called an "article processing charge": the authors pay the journal to publish), and seeing how many of them accept the paper. What happens is that a bit more than half of them accept it with, of course, not a single sign of peer review happening. The author then concludes that open access journals are for the most part of poor quality, implying that they are worse than "traditional journals". This conclusion is eminently ridiculous.

First, if you want to compare open access and closed access journals, you also need to actually test the closed access journals. You can't just assume that they are good, especially when that is merely implied and not a single reason to believe it is given (trust me, in many cases it would be hard to find one).

Second, the method makes no sense. No researcher sends his or her paper to unknown journals. When a paper is submitted to a journal, the journal is chosen according to at least two criteria: on one hand, its prestige, because the more prestigious the journal, the more widely it is read, and the goal of an academic is of course to be read by the other researchers in their field; on the other hand, the seriousness of the journal (which is tacitly known within its academic community), because the peer-review process is very important to the authors (no academic likes to have a paper published and then discover that there is a mistake in it). Amusingly, this is something that scientific news magazines such as Nature and Science are not that good at.

Third, the real conclusion of this poorly conducted experiment is that the bibliometric pressure on researchers (“publish or perish”), coupled with their natural and understandable wish to be more widely read, and thus to publish in open access journals, has given birth to an unhealthy publication business that is harmful to research and science. (Recall that the author only experimented with author-pays open access journals while calling them simply "open access", as if he were, strangely, paid by the publishers' lobby to deepen the already existing confusion between gold open access, where authors pay, and open access in general, which exists in many different models.)

To me, the real solution to all these problems lies in Diamond Open Access and Open Peer Review.


I'm not qualified to comment on every point you made, but it's certainly true that his methodology was not very scientific. Some publications are quoted in the article as saying that they would have reviewed the paper later, and then they would have rejected it. As he wasn't prepared to pay the publication fee, he had no way of testing this.


I recall Higgs saying that, by today's standards, he would have been fired from his post.


Well I mean come on. He barely published more than ten papers! He hardly did anything at all. (except for that Higgs bosun thing; that was pretty good.)


He sure sailed a long way on that "bosun" thing.


Academics are judged on stupid factors based on the quantity of their publications and citations rather than on the quality of their research.

This reminds me of PageRank and SEO. In fact, this ring is basically the academic-journal equivalent of a black-hat SEO link farm.
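
A toy version of the arithmetic behind such a ring (Python, numbers invented): a handful of members agreeing to cite each other inflates everyone's counts with no extra merit, exactly like reciprocal links in a link farm inflate naive link-based rankings.

    # Toy citation ring: n members agree that every new paper cites one
    # recent paper from each of the other members.
    members = 5
    papers_per_member_per_year = 100
    # Extra citations each member receives per year from the ring alone,
    # independent of the quality of anything published:
    extra = (members - 1) * papers_per_member_per_year
    print(extra)  # 400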



