It's a fair point. In the ideal setting, peer review can be a genuinely informative and important gate. And who better to be the gatekeeper than someone who understands the context?

However, there are still big issues with how these peers perform reviews today [1].

For example, if there's a scientifically arbitrary cutoff (e.g., the 25% acceptance rate at top conferences), reviewers will be mildly incentivized to reject (what they consider to be) "borderline-accept" submissions. If the scores are still "too high", the associate editors will overrule the decision of the reviewers, sometimes for completely arbitrary reasons [2].

There's also a whole list of things reviewers should look out for, but for which they lack the time, space, tools, and incentives. For example, reviewers are meant to check whether the claims match what is cited, but I don't know how many actually take the time to look at the cited content. There's also checking for plagiarism, GenAI and hallucinated content, whether the evidence supports the claims, how the charts were generated, "novelty", etc. And there are things reviewers shouldn't check, but that pop up occasionally [3].

However, you would be right to point out that none of this has to do with peers doing the gatekeeping, only with how the process is structured. But I'd argue that this structure is so common that it's basically synonymous with peer review. If it produces bad experiences often enough, we really need to push to bring more tools and honesty into the process [4].

[1] This is based on my experience as a submitter and a reviewer. From what I see/hear online and in my community, it's not an uncommon experience, but it could be a skewed sample.

[2] See, for example: https://forum.cspaper.org/topic/140/when-acceptance-isn-t-en...

[3] Examples of things reviewers shouldn't check for or use as arguments: did you cite my work; did you cite a paper from this conference; can I read the diagram without glasses if I print out the PDF; do you have room to appeal if I claim I can't access publicly available supplementary material; etc.

[4] Admittedly, I don't know what the solution would be either. Still, some mechanisms come to mind: open but guaranteed double-blind anonymous review; removal of arbitrary cutoffs for digital publications; (responsible, gradual) introduction of tools like LLMs and replication checks before submissions reach the review stage; actually monitoring reviewers and acting on bad behavior.




