
I know from previous comments that there is someone affiliated with the SAGE group who contributes to HN, though I can't seem to find those comments now. I wonder if they can share any details on how this was discovered - was it just something seeming fishy to an editor, or some kind of fraud detection algorithm?



I'm on the board. I actually don't have any more information about this right now, but I'll be following up to see what I can learn. I imagine that, similar to voting ring detection on a site like HN, the specific details about what tipped us off are probably going to remain semi-private. But the issue of fraud detection in academia is incredibly interesting and a good example of where publishers (both via software and human intervention) are playing an important role. I'll see if we can share any more of the technical details publicly.


It seems safe to assume most "known" ring detection systems in use are private/secret, since their goal is to prevent abuse and revealing their details might thwart their effectiveness. Nonetheless, at least some ring detection work is being done publicly, and I've recently been trying to learn more about the methods being used. The following should be helpful to anyone who has to fight this sort of abuse:

"Doppelganger Finder: Taking Stylometry To The Underground" https://www.eecs.berkeley.edu/~sa499/papers/oakland2014-unde...

Code: Find multiple accounts (doppelgangers) of a user https://github.com/sheetal57/doppelganger-finder

https://psal.cs.drexel.edu/index.php/JStylo-Anonymouth https://github.com/psal/jstylo
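
For a rough sense of what these tools are doing, here's a minimal, hypothetical sketch (Python, stdlib only) of the core idea: treat each account's writing as a bag of character n-grams and flag pairs of accounts whose style vectors are suspiciously similar. The function names, threshold, and toy data are my own for illustration; the actual Doppelganger Finder uses a much richer feature set and a probabilistic model on top of it.

    # Toy stylometric "doppelganger" detector: character 3-gram profiles
    # compared by cosine similarity. Illustrative only, not the paper's method.
    from collections import Counter
    from itertools import combinations
    from math import sqrt

    def char_ngrams(text, n=3):
        """Count character n-grams, a crude but common style feature."""
        text = " ".join(text.lower().split())  # normalize case and whitespace
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine_similarity(a, b):
        """Cosine similarity between two sparse frequency vectors."""
        shared = set(a) & set(b)
        dot = sum(a[g] * b[g] for g in shared)
        norm_a = sqrt(sum(v * v for v in a.values()))
        norm_b = sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def find_doppelgangers(accounts, threshold=0.9):
        """Return account pairs whose text looks stylometrically alike."""
        profiles = {name: char_ngrams(text) for name, text in accounts.items()}
        suspicious = []
        for (a, pa), (b, pb) in combinations(profiles.items(), 2):
            score = cosine_similarity(pa, pb)
            if score >= threshold:
                suspicious.append((a, b, score))
        return sorted(suspicious, key=lambda t: -t[2])

    if __name__ == "__main__":
        # Imagine concatenated review text per reviewer account (made up here).
        reviews = {
            "reviewer_a": "The manuscript is sound, however the methods section needs work.",
            "reviewer_b": "The manuscript is sound, however the results section needs work.",
            "reviewer_c": "I enjoyed this paper. Figures are clear and conclusions follow.",
        }
        for a, b, score in find_doppelgangers(reviews, threshold=0.6):
            print(f"{a} <-> {b}: similarity {score:.2f}")

In practice you'd want far more text per account than this, plus features that are harder to fake (function words, punctuation habits, syntax), but even this crude version shows why reviewers sockpuppeting their own reviews tends to leave a detectable fingerprint.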


This sounds like something akin to "security through obscurity." The perceived advantages of proprietary, secret security measures are often dwarfed by the benefits of opening a system up to careful scrutiny.


Knowing that "security through obscurity" is a bad thing is a fine heuristic, especially for beginners.

Thinking "security through obscurity" is automatically and always bad is incorrect.

The people running this detection software don't do it to satisfy your intellectual curiosity. They do it because they are attempting to prevent attacks on their systems.


It depends. Complexity and relatively low interest in a project may mean that only a few truly motivated individuals would even look at the code - and in this case the people most motivated are those who want to abuse the system.


Just a quick follow-up to this. This particular detection was triggered by someone raising a red flag about a suspicious-seeming email from a reviewer, which turned out to be from one of the illegitimate accounts we eventually detected.

We also believe this particular fraud would have been identified regardless because we had a manual review scheduled for the most-cited articles of some of our journals, which would have unearthed this same thing.

In either case, fraud detection like this is a fairly manual process for us at this point. Our short-term solution to ensure this doesn't happen again for this particular journal is to throw more editors at the problem. I don't know what automated systems we currently have or plan to put in place, but that's something I'll be following up on.

If anyone happens to be working on a startup for automated fraud detection in academia I'm sure we'd love to talk :) As always, feel free to contact me directly, details in my HN profile.


I guess it's Doug who has already responded, and the thread you are referring to is this one: https://news.ycombinator.com/item?id=7641398 ?


I would bet a lot of places that traditionally ran on trust/reputation are vulnerable to bad actors, at least once. It's sad more than anything else...



