I know there is someone affiliated with the SAGE group who is a contributor to HN from previous comments, which I seem to be unable to find now. I wonder if they can share any details on how this was discovered: was it just something seeming fishy to an editor, or some kind of fraud detection algorithm?
I'm on the board. I actually don't have any more information about this right now, but I'll be following up to see what I can learn. I imagine that similar to voting ring detection on a site like HN, the specific details about what tipped us off are probably going to remain semi-private. But the issue of fraud detection in academia is incredibly interesting and a good example of where publishers (both via software and human intervention) are playing an important role. I'll see if we can share any more of the technical details publicly.
It seems safe to assume most "known" ring detection systems being used are private/secret, since their goal is to prevent abuse and revealing their details might thwart their effectiveness. Nonetheless, at least some ring detection work is being done publicly, and I've recently been trying to learn more about the methods being used. The following should be helpful to anyone who has to fight this sort of abuse:
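As a rough illustration of the kind of graph analysis these public methods tend to start from (and emphatically not any publisher's actual system), here is a minimal Python sketch that flags author pairs who cite each other suspiciously often; the data format and threshold are invented for illustration.

    from collections import defaultdict
    from itertools import combinations

    def flag_reciprocal_citers(citations, threshold=5):
        """Toy heuristic: flag author pairs who cite each other at least
        `threshold` times in each direction.

        `citations` is an iterable of (citing_author, cited_author) pairs.
        Real ring detection would look at whole clusters, reviewer
        assignments, timing, and much more.
        """
        counts = defaultdict(int)
        for citing, cited in citations:
            counts[(citing, cited)] += 1

        authors = {a for pair in counts for a in pair}
        flagged = []
        for a, b in combinations(sorted(authors), 2):
            if counts[(a, b)] >= threshold and counts[(b, a)] >= threshold:
                flagged.append((a, b, counts[(a, b)], counts[(b, a)]))
        return flagged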
That sounds akin to "security through obscurity." The perceived advantages of proprietary secret security measures are often dwarfed by the benefits of opening up a system to careful scrutiny.
Knowing that "security through obscurity" is a bad thing is a fine heuristic, especially for beginners.
Thinking "security through obscurity" is automatically and always bad is incorrect.
The people running this detection software don't do it to satisfy your intellectual curiosity. They do it because they are attempting to prevent attacks on their systems.
It depends. Complexity and relatively low interest in a project may mean that only a few truly motivated individuals would even look at the code - and in this case the people most motivated are those who want to abuse the system.
Just a quick followup to this. This particular detection was triggered by a person raising a red flag that something seemed fishy with a particular email from a reviewer, which turned out to be from one of the illegitimate accounts we eventually detected.
We also believe this particular fraud would have been identified regardless because we had a manual review scheduled for the most-cited articles of some of our journals, which would have unearthed this same thing.
In either case, fraud detection like this seems to be a fairly manual process for us at this point. Our short-term solution to ensure this doesn't happen again for this particular journal is to throw more editors at the problem. I'm unaware of what automated systems we currently have in place or plan to have in place, but that's something I'll be following up on.
If anyone happens to be working on a startup for automated fraud detection in academia I'm sure we'd love to talk :) As always, feel free to contact me directly, details in my HN profile.
I would bet a lot of places that traditionally ran on trust/reputation are vulnerable to bad actors, at least once. It's sad more than anything else...
Papers coming out of China and Taiwan need to be regarded skeptically; this reminds me a lot of the fairly recent "China's academic scandal: call toll-free hotlines to get your name published" http://www.wired.co.uk/news/archive/2013-12/02/china-academi...
The particular problem that article is talking about is one in the PRC, not extending to Taiwan. As I understand it, orders came from on high (by definition the Communist Party of China) driving this.
Which is not to say the "publish or perish" problem isn't endemic, and that we can safely assume Taiwan is about as bad as any other "normal", non-cargo cult science country.
Because Taiwan is so close to the PRC, in every sense of the word, you have to worry. Has the Taiwanese government been enacting China-like policies? When researchers return to Taiwan, do they bring with them corrupt practices? Or even just beliefs in evaluating researchers by publication count? Do the revelations about Chinese papers encourage Taiwanese researchers to try their own hand at it, and normalize it? Are there any underlying cultural factors which might be the same in both countries? And so on.
> that we can safely assume Taiwan is about as bad as any other "normal", non-cargo cult science country.
I wouldn't say that. For example, there's a strikingly consistent observation that close to 100% of acupuncture experiments succeed when they are done in East Asia (which is pretty much impossible for any effect size) despite rates halving when you move out of the region; is the placebo effect that much stronger in East Asia, or does this reflect investigator bias / incentives to get positive results by any means etc?
My experiences with the business climates of Taiwan and China suggest to me that the set of "senses" you and I have associated with the word "close" must be vastly different.
The universities are set up similarly, many of China's flagship universities exist in both PRC and ROC (e.g. Qinghua). In other aspects, Taiwan is closer to Japanese culture (or so I hear, I've never stepped foot in ROC before).
While I can see the point that research from China is harder to verify in some cases, I think the statement "Papers coming out of China and Taiwan need to be regarded skeptically" goes a bit far. China has strengths in some fields, and even if research institutes in China are of lower average quality than their western counterparts, some are still of high quality. Are you forgetting that Hong Kong is part of China, and has research links to top universities in the Chinese mainland? Hong Kong provides the example of HKUST, a high-quality university which I would argue sets a leading standard for research coming out of Asia.
In many scientific fields, initial results out of China are almost entirely ignored. It might not be fair to the honest researchers, but the amount of fraud from China is staggering. This goes far beyond undisclosed conflicts of interest.
In 2010, however, Nature had also noted rising concerns about fraud in Chinese research, reporting that in one Chinese government survey, a third of more than 6,000 scientific researchers at six leading institutions admitted to plagiarism, falsification or fabrication.
Last month a collection of scientific journals published by Zhejiang University in Hangzhou reignited the firestorm by publicizing results from a 20-month experiment with software that detects plagiarism. The software, called CrossCheck, rejected nearly a third of all submissions on suspicion that the content was pirated from previously published research. In some cases, more than 80 percent of a paper's content was deemed unoriginal.
"Many agencies have toll-free hotlines and customer service
representatives working ten hours a day. Some even list the titles
of papers for sale, or the names of journals with which they
supposedly collaborate, on their websites. It's a sort of
Wild West of academic publishing. You have a whole laundry list
of services on offer that many scientists would probably never
imagine to be possible."
I wonder what the thought process is like for someone who decides to be a professional fraudster. "I'm going to spend my life pretending to be a scientist." Is it because they feel there's nothing else they could transition into, so they're forced to decide between cheating and failing? Or is it a conscious decision to cheat out of greed?
What's interesting is that the sheer amount of fraud suggests that this thought process is considered at least somewhat normal by an entire society. Which societal factors could've led to this situation?
Delusion is an ugly, ugly thing. The thought is not 'I'm going to be a fraudster'; it's more like 'If I don't do this, I'll never be a tenured professor.' Also, from what little I know, China has a huge sense of 'saving face.' These services allow one to save face, if only in the short term. And if everyone loses face, it is an easier blow to take, as you are absolved from fault due to social pressure. If you call the emperor out on having no clothes, you make a fool of everyone, not just the leaders. That is very dangerous in a society that highly prizes honor and face, as you are calling everyone out on it. It's not likely to make you friends.
I think it's a culture thing. In China, it doesn't matter who you are; who you seem to be is everything.
I remember someone lying about pretty much everything he bought, sometimes making it out to be nearly 10 times more expensive than it actually was (I checked afterwards), probably to make himself seem wealthy.
Same in the office, it doesn't matter if you sit there all day doing nothing productive. As long as you leave late, it will make people think that you're working hard, and that's all that matters.
Is this the case for most of China? It's just so hard to believe that most of a society's culture could actually be like that. But maybe that's just inexperience talking.
No. Remember that China is a large place. It is not as homogeneous as is sometimes portrayed. Cultural differences in China occur by region, by class, by ethnic minority, by age, and by other factors. So blanket statements about China tend to be rough generalizations. Bragging about spending 1000 yuan on something that actually costs 100 yuan might work in some parts of China, but spend time with a different type of people in a different part of China and you risk sounding like a pompous jerk. Which is also true of a lot of western societies, come to think of it.
Bear in mind that the entire machinery of government, law enforcement and defence is a political monopoly with only the veneer of a pretense to rule in the interests of the people. Cynical manipulation of the system for personal gain is completely normalised. I speak from extensive personal experience.
That's not to say that everyone in China is practicing the worst excesses, but even the most conscientious and moral people in China are put in a position where they have to manipulate the system just to get anywhere or do pretty much anything, even for basic licenses and permits. Everybody is on the same slippery slope whether they like it or not. It's just a question of how far you're willing to slide.
It's a non sequitur, in any case. Someone is butthurt because there are actual differences between east and west that aren't pretty.
I'm not saying the west doesn't have its share, just that:
A) they are different shames and
B) there's no magical rule which makes all cultures equally good. Some places/people are inevitably worse than others.
Thanks a lot! Not all of us publishing out of the PRC are corrupt, and any bias by PC members based on country of origin would be unethical and probably racist!
Your response is the natural instinct, but it's ultimately misplaced. It is completely rational for people to notice that corruption is highly correlated to certain locations in the world, and you're not going to be able to just assert your way into making that false. The correct response is to make it irrational by making it not true, and working to root out the corruption in question.
It's hard. I don't even have words to describe how hard that is. But it does at least stand a chance of working.
(This seems to be ground deeply into the human psyche. In general, a group of any kind is best off in the long term if the group polices itself for misconduct more aggressively than any outsider could. But the human instinct is to band together, assist each other to various degrees in covering over problems (often not even deliberately per se, but as a side effect of the system's structure), and attack anyone external who notices anything amiss. This, alas, works quite well in the short term, but is death in the long term.)
> It is completely rational for people to notice that corruption is highly correlated to certain locations in the world
What you just 'notice' is anecdotal and biased by, um, your biases. It's not a correlation until you actually demonstrate it in a validly constructed analysis.
Of course, the trick is that legitimate attempts at validly constructed analyses are also affected by the biases of the analyzers. When your conclusion is something that 'everyone knows' (but not necessarily based on legit analyzed evidence), I think it's especially at risk of confirmation bias (when you try to muster the legit analyzed evidence).
It would be very interesting to find historical examples in the US of something that, for a while, scientists simply believed in, and believed that valid objective scientific experiments have shown to be so -- but then later the scientific community came to the opposite consensus. (ESP in the early 20th century might be one example, believe it or not. Look it up!) Science isn't quite as foolproof as one might like or assume.
Yeah, good point, I don't know why you got downvoted -- at one point, that was consensus science -- and there was all sorts of scientific research, the overwhelming majority of scientific evidence, that people believed demonstrated it.
At the moment, consensus science is that there isn't even such a valid object of inquiry as 'race', so you can't even make claims about it. (Although this is changing in interesting ways too, post-genomics)
Well, there is corruption here, and many of the papers I personally read from China are not very good. That being said, MSR-Asia is still MSR even if we are located in China, and we have very high standards. I would also suggest that quality can come from China; it's a numbers game: we have a lot of people in this region, and there are bound to be some gems that you might encounter if you are objective. Casting a stereotype over a whole region just isn't useful in that regard.
All I'm asking for is fairness. Too many of my colleagues have to disguise where they are from to even get by double-blind review (there is a bias against China-based papers, so we take great care to extra-anonymize our papers). This is ridiculous and immoral.
>Your response is the natural instinct, but it's ultimately misplaced. It is completely rational for people to notice that corruption is highly correlated to certain locations in the world, and you're not going to be able to just assert your way into making that false. The correct response is to make it irrational by making it not true, and working to root out the corruption in question.
That's like saying "blacks are lazy" and when someone complains you tell them that it's normal for people to make such correlations and that to fix this he has to work on "making it not true".
You got it backwards. The burden of proof is on you.
You have to prove what you state with regards to corruption is true. Not just speak like it's a given, and demand of them to fix it.
>This seems to be ground deeply into the human psyche. (...) the human instinct is to band together, assist each other to various degrees in covering over problems (often not even deliberately per se, but as a side effect of the system's structure), and band together to attack anyone external who notices anything amiss.
This is a generalization that can be said for all groups. Not a proof (or even an indication) that any charge of "something amiss" from somebody external is right.
I have friends that studied in the UK (including Cambridge) and the US. My anecdotal evidence is that tons of papers in Comp-Sci in those countries are also BS (to keep the churn rate up). And academics return favors to other academics all the time, with regards to favorable reviews, etc.
(Btw, I'm not Chinese, so the pop psychology 101 above doesn't apply).
Without answering your question -- I'm not sure of the answer -- I think the more important question is:
How exactly do you 'know' that (hypothetically) 9 out of 10 authors from Laos are corrupt?
How have you demonstrated/discovered this? (Hypothetically, I realize this is just a hypothetical example.) How sure are you that your findings weren't affected by biases, with those biases shaping your research design or approach so as to accidentally confirm them?
When I plotted the percentage of debunked papers by country of origin, it was clear Elbonian papers are debunked 3x as often as anywhere else.
With that kind of a wide net, I would think the main sampling errors would be things like "Only studied physics papers" or "forgot to gather data from Journal Y". The findings could be impacted, certainly, but hopefully still trustworthy enough to be actionable?
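For what it's worth, the aggregation being described is trivial; the hard part is exactly the sampling question raised above. A minimal sketch, with an entirely made-up dataset and column names:

    import pandas as pd

    # Hypothetical data: one row per paper, with a manually assigned
    # "debunked" label. Any real analysis lives or dies by how these
    # papers were sampled in the first place.
    papers = pd.DataFrame({
        "country":  ["Elbonia", "Elbonia", "Elbonia", "Freedonia", "Freedonia"],
        "debunked": [True, True, False, False, True],
    })

    rate_by_country = (
        papers.groupby("country")["debunked"]
              .agg(rate="mean", n="count")
              .sort_values("rate", ascending=False)
    )
    print(rate_by_country)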
Discrimination is considered unethical when it is baseless.
When it has merit, it is accepted as ethical.
It is ethical to deny a loan to people with a terrible credit score, because there is very good reason to suspect people with a terrible credit score will default. It's still discrimination, but it is ethical because it is based in fact.
Discrimination can have useful purpose. You can't expect people to expose themselves to a 9 out of 10 risk of being exploited by corrupt authors "in the name of equality".
You basically stated a definition of racism and then asked if that was racism. It doesn't really matter if evidence is involved or not. Racism is just stereotyping based on ethnicity or culture; as soon as you've generalized for good or bad, it is racism (yes, racial profiling is racism also).
Ok, I'm not interested in arguing what is or isn't racism. So let's assume you're right. Ok, I'm a terrible racist of the worst kind.
But what's your alternative? Is racism such a Great Evil we are willing to risk corruption of the entire global scientific knowledge by not applying extra scrutiny to "high-risk" papers? We should allow fraudulent or corrupt papers to freely pollute the community, just to make sure everyone feels like they are being treated equally?
Frankly I care a lot more about good science than being politically correct. Besides, if your science is sound, why would you be upset if your paper undergoes extra scrutiny? Isn't that why you publish it, to be reviewed?
Please note that I'm not suggesting "all PRC scientists should be excommunicated from the scientific community" or anything like that. But trust has to be earned.
> But what's your alternative? Is racism such a Great Evil we are willing to risk corruption of the entire global scientific knowledge by not applying extra scrutiny to "high-risk" papers? We should allow fraudulent or corrupt papers to freely pollute the community, just to make sure everyone feels like they are being treated equally?
If you told me that "one out of three black American males will go to prison in their lifetime" and so, accordingly, you decided to be more suspicious of all of them, then you would receive a plethora of downvotes and would be (rightfully so) tagged as a racist. But since the generalization you made is about Chinese, so it is somehow more socially acceptable to say that you will treat their papers differently when you review them? Perhaps, but that is not objective and has more to do with the culture of the hackernews community.
> Frankly I care a lot more about good science than being politically correct. Besides, if your science is sound, why would you be upset if your paper undergoes extra scrutiny? Isn't that why you publish it, to be reviewed?
Paper reviewing is already very subjective, to add racism on top of that is just too much. Reviewers usually make up their minds about a paper in around 5 minutes, and then that bias clouds how they read the rest of the paper. My papers are usually immune to this because my name sounds "white," but my colleagues (who I often co-author with) receive no such benefit.
I don't know. Are 9 out of 10 authors from Laos really known to be corrupt, or was it just an example used because we consider Laos a convenient place to pin third world stereotypes and first world snobbery on?
If 9 out of 10 authors from Scottsdale, Arizona are known to be corrupt, is it unethical and racist to be biased against authors from Scottsdale, Arizona?
It was an example, and I picked Laos because it is small. People get defensive if I name their country, instead of thinking rationally, and because Laos is small most people reading my comment will not be from Laos.
There is a large Laotian community in the bay area, many of whom are in the computer industry and would be in the audience of Hacker News. You should have picked Bhutan, or maybe New Zealand; it's hard to offend kiwis (they are so laid back).
It sucks, but it's like someone trying to use a credit card out of Nigeria. Even if you're honest, there's a massive problem localized to your geographic area. It's not your fault.
Just randomly looked up one of the publications. It's not my area, so I can't talk about the meat of the paper itself, but what does the discussion section have to do with anything?
It's more of a motivation or justification of the work than anything -- they want to apply the paper's methods to control marine-related engineering systems which are becoming more important to the future of Taiwan.
It sounds like a non-sequitur because it was probably copied and pasted from the grant proposal to get funding for the work. The acknowledgments show that funding came from the "National Science Council of the Republic of China, Taiwan", so that organization would naturally want to know why they're funding work on fuzzy numbers and how it benefits the country.
Right, it just seemed like they didn't make any connection whatsoever between this particular research and a concrete naval/engineering problem. Kind of like saying "look this is math, engineering is important to Taiwan, engineers use math".
I don't know anything about SAGE outside of what I've read in a psychiatrist's blog, but isn't the reputation of SAGE Publications' journals trashy already?
Strangely enough, I just started getting spam from SAGE this morning about subscribing to their Sociology journals. My area is computer science so it seems they're just spamming university email addresses. Not too classy.
Academics are judged on stupid factors, based on the quantity of their publications and citations rather than on the quality of their research.
The idea of bibliometrics is that you can compare on a single dimension individuals who do not work on the same subject, who use different methods, who do it in different communities which have different publication habits, etc. Of course this is completely stupid.
It's like saying that a cyclist who has participated in 10 Tours de France is better than a judoka who has participated in only 3 Olympic games. Only here you also give the cyclist a job and the funding to do it properly, and tell the judoka to go f* himself until he participates in more Tours de France.
Of course, the thing is that you can't even have good criteria, because any criteria you give necessarily become the goal: instead of doing good research and publishing what is necessary when and where it makes sense, academics are forced to do research and adopt a publication policy that satisfies the arbitrary criteria. And even in the unlikely case where the criteria match what is actually better, it is impossible for that to hold across so many different fields of research.
The only good solution would be to judge academics by the two or three most relevant publications they have on the matter of the grant or job. That would require reading the publications, and if possible the report of the peer-review process (which would be enabled by open peer review).
The first and last articles have fittingly long "criticism" sections. A key point to make here is that these bibliometrics succeed at their basic goal (providing one platform by which to judge all academics/publications) at the price of misaligning incentives.
As a result, you do get to judge all academics on an even platform, but that platform is a weighted average of how well they do research and how well they play the politics/popularity game, scaled by the popularity of their field (good luck finding me a researcher in theoretical plasma physics with an h-index over 30; I'm not sure there are even 30 theoretical plasma physicists in the US).
On top of this, sites like ResearchGate (like LinkedIn, but for researchers) give people their own score, which is pretty opaque, and display it in bright green next to everyone's profile picture. It introduces a lot of competition to a field that doesn't really need it.
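For reference, the h-index mentioned above is simple to state and compute: it is the largest h such that the author has h papers with at least h citations each. A minimal sketch:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 3 and 1 times give an h-index of 3.
    print(h_index([10, 8, 5, 3, 1]))  # 3

Which also makes it easy to see how the metric punishes small fields: with only a handful of active authors citing each other's work, citation counts, and therefore h, stay low no matter how good the work is.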
It is a dire situation, but the fact that the researcher(s) went to great effort to get into that particular journal shows that at least the perceived quality of the journal is still being used as a rough metric for the quality of the research. So quality is still a factor, even if quantity has far too much weight.
If they had simply wanted to get published, then it seems like there are plenty of journals that will accept almost anything (they get paid on publication). More than 150 out of 300 open access journals accepted an obvious spoof paper last year[1].
Of course, the "impact factor" of the journal also counts, however it is gamed too. For instance there are publishers which artificially increase the impact factors of their journals by requesting that authors of accepted papers cite a few articles recently published in their journal.
Now, I would like to come back to the "article" you cite. Let me get this straight from the beginning: this text (by John Bohannon) is a piece of shit. Let me explain why.
The article aims to compare open access journals and closed access ones. The method used for that purpose is quite remarkable: it consists of sending a paper of very poor quality, with wrong results in it, to many gold open access journals (meaning there is an "Article Processing Charge": the authors pay the journal to publish), and seeing how many of them will accept the paper. What happens is that a bit more than half of them accept the paper with, of course, not a single sign of peer review happening.
The author then concludes that open access journals are for the most part of poor quality, implying that they are worse than "traditional journals". This conclusion is eminently ridiculous.
First, if you want to compare open access and closed access journals, you also need to actually test the closed access journals. You can't just assume that they are good, especially when that is merely implied and not a single reason to think so is given (trust me, in many cases it would be hard to find one).
Second, the method makes no sense. No researcher sends his or her paper to unknown journals. When a paper is submitted to a journal, the journal is chosen according to at least two criteria: on one side, its prestige, because the more prestigious the journal, the more it is read, and the goal of an academic is of course to be read by the other researchers in his domain; on the other side, the seriousness of the journal (which is tacitly known in its academic community), because the peer-review process is very important for the authors (no academic likes to have a paper published and then discover that there is a mistake in it).
Amusingly, this is something that scientific news magazines such as Nature and Science are not that good at.
Third, the real conclusion of this poorly carried-out experiment is that the bibliometric pressure on researchers ("publish or perish"), coupled with their natural and understandable wish to be more widely read, and thus to publish in open access journals, has given birth to an unhealthy publication business that is harmful to research and science. (I note that the author only experimented with author-pays open access journals while calling them simply "open access", as if he were, strangely, paid by the publishers' lobby to deepen the already existing confusion between gold open access, where authors pay, and open access in general, which exists in many different models.)
To me, the real solution to all these problems lies in Diamond Open Access and Open Peer Review.
I'm not qualified to comment on every point you made, but it's certainly true that his methodology was not very scientific. Some publications are quoted in the article as saying that they would have reviewed the paper later, and then they would have rejected it. As he wasn't prepared to pay the publication fee, he had no way of testing this.
Well I mean come on. He barely published more than ten papers! He hardly did anything at all. (Except for that Higgs boson thing; that was pretty good.)
It's pretty difficult to lose tenure (that's the point), but publication volume and quality are pretty much the metric for hiring and promoting faculty. Just like standardized testing at the undergraduate admissions level, it's pretty much inevitable that some people will attempt to game the metric instead of what it's supposed to reflect.
I think that hiring faculty will tend to be primarily about publications. But promoting faculty will also depend on pulling in grants - perhaps even more so.
Grants and publications become a virtuous circle. Getting grants gives you time and resources (and prestige) to get publications. Getting publications is the gateway to getting grants (before you have grants), and getting more grants (once you have grants). E.g. the measure of a grant-getter's success (at utilizing a grant) is generally the number and "quality" of publications the grant generates.
I agree, but you're leaving out the money part. Schools get a cut of all grants. From the school's perspective, the grant itself is valuable. So when it comes time to evaluate tenure and other promotions, they will look at both.
I would say the metric, at least in fields with a lot of money, is actually the amount of funding you bring in. Of course, publication record has a big influence on that, too.
This trickles down to the grad students and their relationship with their advisors. The deal we were given was: "publish three papers and you get your PhD". This creates some really perverse incentives on both sides.
that sounds like pretty clear expectations to me, much better than the nebulous "do something novel for a few years, impress some old people, and then maybe you'll get a PhD, just maybe." yes, the game is imperfect and always stacked against newbies (as any game is), but at least you know where the goalposts are. the experience of my peers in grad school mirrored this "three papers -> PhD" heuristic: http://www.pgbovine.net/PhD-overview.htm
I've heard that in 'blind' peer reviews, the feedback will often be: "Paper would be improved by including reference to the work of XYZ" - essentially creating additional citations for author XYZ (who may, coincidentally, be the reviewing peer, and may feel that getting additional citations for their own work is the payment they require for doing the review work).
This definitely happens, but often you're asked to review a paper _because_ of your experience or publication record in that area. If someone neglects to cite your related work, should you just let that slide?
In some countries, you get paid to publish in scientific journals. It is a way for the country to get recognition. There is massive incentive then to publish, apart from career incentives.
It looks like there's an industry out there that would enrich a few coders and crypto experts with a tiny sum of money for a more tamper-resistant academic reputation system.
It could be worth more than Bitcoin. I only wish that I already knew more about crypto so that I could both get it done correctly AND be the first solution on the market.
Having a "reputation system" at all is already problematic in the case of research.
Research should not tend to satisfy arbitrary criteria. Nowadays with these h-index and impact factor crap, you see for instance journals which require (more or less officially) authors of an accepted paper to cite at least one or two other recent (1 or 2 years old max) papers from the journal.
Any reputation system will work only for its own benefit and not for the benefit of research.
I'm not at all saying that academics should not be evaluated. Only you can't judge the quality and pertinence of work at the state-of-the-art of so many domains with one (or even a few) number(s).
> There are more papers than anyone can read. So how do you tell which ones are important?
Let the researcher decide. When an academic applies for a job he/she can attach his/her two or three most relevant papers.
> And how do you objectively tell which professors are better than others at doing research?
You can't, that's my point, the question makes no sense.
> Whatever method you use, someone will try to game it.
Then don't. Don't use a method. Evaluate a researcher by actually looking at her/his research.
Yes, it takes longer to read a paper than to read a number. But the number makes no sense and who decided that we have to be in a hurry when deciding how to spend mostly-public money on research and researchers?
That's actually a good question. What legitimacy do privately held journals have, owned by publishers who sell the results of research back to the researchers who gave it to them for free (in the best case scenario; sometimes you get to pay them to give it to them), to exert this much influence on how public money is spent on research? Because that's pretty much what bibliometrics amounts to.
If they game you, then game them back. Make the publication (and mitigation) of potential attack vectors into the reputation system worth more points than could ever be earned by exploitation of the flaw, and serve exploiters a large penalty whenever they are caught.
All you need to do is make the expected value of an attack against the system worth less than just doing some actual research.
"All you need to do is make the expected value of an attack against the system worth less than just doing some actual research."
You'll always have some people willing to take a gamble on the risk to get the reward, especially if there's a low % chance that they get any penalty at all. And on top of the conscious choice, humans are bad at estimating low-likelihood risks.
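A back-of-the-envelope version of that expected-value argument (all numbers invented) shows why the perceived catch rate matters far more than the size of the penalty:

    def ev_of_cheating(gain, p_caught, penalty):
        """Naive expected value: payoff if undetected minus expected penalty."""
        return (1 - p_caught) * gain - p_caught * penalty

    honest_value = 1.0  # normalized payoff of just doing the research

    # With a 1% perceived catch rate, even a penalty 10x the honest payoff
    # leaves cheating well ahead; at a 30% catch rate it turns clearly negative.
    print(ev_of_cheating(gain=2.0, p_caught=0.01, penalty=10.0))  # ~1.88
    print(ev_of_cheating(gain=2.0, p_caught=0.30, penalty=10.0))  # ~-1.6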