There are some good points here, but the sweeping conclusion (presented with utter certainty) does not follow from them.
It seems like the author's beef is with journals, rather than peer-review. If we did away with "peer review" today, journals would still have to operate the same way -- they'd still have many more submissions than they have room for, so a team of people (ideally, peers) would need to, uh, review those submissions according to some criteria. We can discuss whether the criteria should be adjusted, but I don't see how journals survive without gatekeepers.
So, fine, he wants to do away with journals. Without a description of an alternative system, it sounds like the best researchers would just...upload their stuff to arXiv and hope that someone reads it? Again, I'm not saying there is no alternative, but because he spends all his time arguing against "peer review", he spends no time discussing alternatives to journals that would solve more problems than they create.
He addresses the question "can we fix peer review instead of replacing it" by discussing ways that fixes have failed in the real world. So what makes him think that a replacement would be easier? The "burn it all down and rebuild it according to my preferences" approach also doesn't have a great track record!
And the certainty with which he states his conclusions gives me a sense that this is not someone who's super open to feedback.
He did set an example by publishing his paper on his own blog. So maybe every researcher could set up a blog, upload papers there, and expect them to show up in the browsers of people who follow the researcher, either because he once shared something interesting or because someone else shared his paper?
I've always wondered why academia can't work like social media. Not Twitter or Facebook, which are algorithm-controlled -- blogs would work, since everyone maintains their own sovereignty.
Ah, but there are also systemic issues that prevent Congress from doing the things you'd like them to do. For instance, Congress is vulnerable to money. Journals have money they can use to lobby. Are there other players with money and incentive to lobby for the "burn it all down" side? Which mountain of systemic issues is easier to climb? Sounds like an empirical question.
Solving the problem by either approach is going to be really really complicated and full of compromises you won't like, and anyone who talks like their solution is super simple and obvious should be read with skepticism.
Had one paper with 5 reviewers: 4 thought it was pretty good or awesome, the 5th did not, with about 7 pages' worth of criticism that was mostly invalid. I resubmit, with modifications. That same week a paper of virtually the same study (not plagiarized, just coincidentally the same study) is published by a heavyweight 6 hours up the road from us. So reviewer 5 says: even if the modifications are good enough, this has already been done by a bigger, better researcher and is therefore not original. Paper gets rejected outright.
So my supervisor does some digging, and guess who reviewer #5 just so happened to be...
Write to the PC chair and the steering committee of the conference. Those kinds of actions do have consequences. At a minimum you should be able to request a conflict-of-interest review of the situation.
Outside of engineering (or at least in the Chemistry/Bio field where I came from), conference submissions are abstracts, not papers. There may sometimes be special issues where speakers are invited to submit a paper after the fact. But "paper" here means drafting the full manuscript and submitting it to the publisher when you're ready to be published in a future issue. So no steering committee, just an Editor-in-Chief who conducts research in the field. In fact, referencing conference abstracts was always frowned upon because they never contained enough information to be properly scrutinized.
Speaking of, this whole "reputable papers only happen with conferences" phenomenon was unheard of to me until I joined an engineering company
I'm confused. You had 5 reviewers for an abstract and one of them produced 7 pages of criticism? I'm aware of many venues in Chem that are abstract submission, but it sounds like you were in a full-paper situation.
OP never said it was an abstract - they are from chem/bio (like me), where papers are submitted primarily to journals, not conferences. They mentioned abstracts in response to your comment about writing to the "steering committee of the conference"
Academia is fundamentally broken. All the incentives are messed up and I have a hard time trusting anything that has been "researched" at all. Really disappointing.
Whether the "peer review" experiment has succeeded or failed is debatable... but anecdotally, everyone I know in academia feels the current publish-or-perish peer-reviewed system is BROKEN: It incentivizes everyone to pursue low-risk, incremental, mostly inconsequential, easy-to-explain research with high certainty of short-term success, instead of high-risk, exploratory, consequential, hard-to-explain research with high uncertainty of long-term payoff.
This is true:
> From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.
This is true as well:
> That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.
Finally, this is also true:
> Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.
"everyone I know in academia feels the current publish-or-perish peer-reviewed system is BROKEN: It incentivizes everyone to pursue low-risk, incremental, mostly inconsequential, easy-to-explain research with high certainty of short-term success, instead of high-risk, exploratory, consequential, hard-to-explain research with high uncertainty of long-term payoff."
That is what I keep hearing from my academic acquaintances as well: when a measure becomes a target, it ceases to be a good measure. And this is what happened with publish or perish.
Let's not forget that Katalin Karikó, a giant in the mRNA field, was demoted by UPenn in 1997 because they considered her work unproductive. But mRNA was precisely the topic that required a multi-decade commitment.
Fortunately she was stubborn as hell. God knows how many other people gave up after being degraded like that.
> But mRNA was precisely the topic that required a multi-decade commitment.
It all comes down to this ^ for me. Advances take decades from start to finish. Institutions/organizations (including funders) need to start not just funding innovation (1-5 year cycles), but investing, really committing, to 20-100 year plans, at minimum.
My biggest fear is that hands are thrown up in the air, and science turns into yet another "content" producer (there are signs, to me, that this is exactly where we are headed). Can't produce science^D content fast enough? Skip peer-review. Community of experts getting in your way? Ignore them. We need to get science onto TikTok, because think of the reach for STEM! Really, do we need science to equate to 15-second dopamine hits?
None of this is to say that there aren't problems with peer review, but, at least in the field I'm in, peer review is critical, useful, and an important aspect of becoming a scientist. If anything, learning to peer-review teaches one how to critically evaluate others' work. Are there people who aren't taught how to peer review? Sure. Does this mean that we should stop peer review? The logic doesn't follow.
I've had my ups and downs with peer review. I'm trying to collect stories of papers from ideation to publication, while also describing the struggle for my own papers.
Looking at my past experiences for all my papers, I can only conclude that the process is more nuanced than can be described in a tagline like "peer review is broken" or "publishing is random", so I'm interested in hearing everyone else's experiences too.
The article makes it clear that peer review and funding are intricately linked and that one would not exist without the other, at least in the beginning.
It does not make that clear at all. It does not claim that peer review cannot exist without the things those scientists complain about (publish or perish, etc.).
The Sydney Brenner quote around the mid-point of the page captures the reality quite well: "it’s simply a regression to the mean". Or, given the typical peer review mechanisms, regression below the mean. If the original author is above average and the paper has three reviewers, any one of whom can block publication, what is the likelihood that the paper will be blocked by someone less competent than the author? If reviewers are sampled at random from the scientific population (they aren't; it's usually worse), then the paper is almost guaranteed to face at least one reviewer less competent than the author.
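To make that concrete, here's a toy back-of-the-envelope sketch (my own model, not from the article; it assumes competence is a single scale, reviewers are drawn independently at random, and any one reviewer can veto):

    # Toy model (hypothetical): an author at percentile p of the competence
    # distribution faces n independent random reviewers, any one of whom can
    # block the paper. Each reviewer is less competent than the author with
    # probability p, so the chance that at least one potential blocker is
    # weaker than the author is:
    def at_least_one_weaker(p, n=3):
        return 1 - (1 - p) ** n

    print(at_least_one_weaker(0.50))  # merely average author  -> 0.875
    print(at_least_one_weaker(0.90))  # 90th-percentile author -> 0.999

Under those admittedly crude assumptions, an above-average author is nearly guaranteed that at least one of the three vetoes belongs to someone weaker, which is exactly the "regression below the mean" worry.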
A few journals are slowly moving toward a better system where publication happens first, then the reviews are published. As a scientist, I would certainly prefer to read the reviews without them blocking/delaying publication.
However, the real problem with peer review is its role in awarding grants, not in publishing papers. I have no idea how to fix that problem without introducing other problems. Too much incentive to game the system.
This argument makes no sense. Why would "less than average" referees block a paper? And why would editors select referees from the population at random? I don't know about other fields, but that's certainly not how things work in physics.
I would say it's more like a club where everyone knows each other. If you are authoring a paper you often know who the potential referees are (those that published most in the field), and you politely ask the editor to exclude referees that are in open conflict with you. Editors usually abide and you get a reasonable review of your paper.
I agree about most other issues raised about peer review in general and related incentives, but the issues raised here are not ones that I recognize.
In biology (my area), it is quite frequent for reviewers to "suggest" additional analyses/experiments that reference the reviewer's own work or sub-field, which delays publication and inflates citations of reviewers. Thus, some people seem to make a strategy of volunteering for review to get more citations.
My understanding is that physics has used arXiv much longer and more consistently than biology has used bioRxiv, so communication suffers less from the impact of delayed (not just blocked) publication.
Physics is a smaller research community than biology, so it is not surprising that club effects are more pronounced. Some less funded sub-fields of biology definitely have that feel (e.g., ecology and evolution), but anything remotely biomedical seems to suffer from the rando reviewer effect much more.
In my experience, the historic "squishiness" of biology and the rapid growth of NIH funding (and hence graduate students mills) leads to a larger population of less rigorous thinkers than in physics. We don't have math requirements to filter those folks out.
> All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.
"We can't say anything about this, so it's clearly a total failure." lol Seems like the author wants this both ways.
Also, I don't know why they think replication would be more likely if we didn't have peer reviewers at all. Feels like a larger jump from "your study won't be reviewed" to "your study must be replicated" than from "your study is being reviewed" to "your study is being replicated".
I'm not sure about the pros and cons of peer review, but if anyone is claiming that they are doing science, then they must publish raw data so that others can review it. This was the first thing that jumped out at me after reading "Accelerate" - so many claims that everything is based on "data". I don't necessarily disagree with the conclusions in the book, but I think it's really just a formalization of Google's development practices. The book should have been called "How we do things at Google".
Well, in the book that actually has that content (and very nearly that title: https://abseil.io/resources/swe-book), they explain that they set themselves up to follow the best practices found by the company that eventually published Accelerate, even before Google bought them.
>> they explain that they set themselves up to follow the best practices found by the company that eventually published Accelerate
Do you mean as far back as the early days (< 2000), or did this happen well after they were established (> 2015)? If the latter, how much actually changed?
Wow. Pretty clear this paper was not peer reviewed. Many of the arguments would not get past a high school English teacher. ‘A whole lot of money for nothin’ perhaps, but that money is not spent on peer review. Peer review doesn’t improve research productivity …. This reminds me of the argument that we should not spend more money on education, because education budgets have gone up, but test scores have not. The argument implies (but does not state, because of the absurdity), that spending less money would raise scores. And, remarkably (/s), peer review does not guarantee that all scientific results are reproducible and correct.
As the author points out, no one has done the control. We don’t know how much worse the scientific literature would be without peer review, and we don’t know how many important insights are lost because of poor reviews. But as someone who has done a lot of peer review (and has had papers rejected with uninformed reviews), it is difficult to state how much worse things could be. (But perhaps that is an overstatement, as very few papers do not find their way into some journal.)
The scientific literature today is overwhelming. Peer review helps make it less so.
I do not understand this argument. Just as peer review is relatively young, so is the massive expansion of the scientific literature. Perhaps in the 50’s (when there was peer review, at least in biochemistry and microbiology), an investigator might reasonably believe they were familiar with most of the literature in their field. That has not been true for decades. So, by volume (if not impact), most scientific results are young and have been subject to peer review.
Perhaps part of the argument has to do with the specific form of peer review. In the 18th and 19th centuries, much more science was communicated via correspondence and books. But I suspect there was still a review process, just one that was less democratic and more dependent on reputation.
Far fewer people die of preventable diseases and malnutrition than any point in history; we've accomplished incredible things, especially since 1960. What are we writing this on?
> it is difficult to state how much worse things could be
Because there's no proof. The only thing we know for certain is that a lot of money is being spent on it, and that some of the mechanisms that are supposed to work can't work (i.e. in the case where people look at and trust preprints anyway).
There will definitely be anecdotes of peer review having caught things (though it'll be harder to show that that prevented harm). But likewise, there are also many, many anecdotes that it has been actively harmful.
No proven upside, limited anecdotal upside, and proven downside. If peer review as a publication gatekeeper hadn't already been a thing, we wouldn't introduce it — that would be considered unscientific. Sounds like something that's ripe for reconsideration.
I don’t understand how we know that a whole lot of money is spent on it. A lot of volunteer (unpaid) time is spent on it, but not much money. And I’m also unaware of the “proven” downside.
But you might ask yourself: why do so many scientists waste so much of their time doing peer review? Perhaps they are ignorant, or easily duped, or perhaps they believe that peer review is a valuable use of their time.
The proven downside is the money/time spent on it. And plenty of that "volunteer" time is time researchers would otherwise have spent on research. (Note: I'm not counting the time spent doing the review and feeding that back to the researcher — that can be valuable. I am counting time spent doing countless revisions just to please a reviewer, time spent resubmitting work elsewhere, etc.) And don't get me started on money wasted on price-gouging publishers thanks to their role as gatekeepers of career credentials.
The question is not why scientists spend time doing peer review. The question is: why do academics waste time bending over backwards to adapt to the whims of a particular reviewer even beyond when they feel the input is useful? (And the answer is above: the career credentials.)
Again, I'm not against peer review by itself. But it should not be a gatekeeper to publication, nor have this much leverage on people's careers (a binary decision for every article).
For starters we know that a lot of money is spent on it in terms of getting the end results. As you say the peer review is done by volunteers, but the final product is then sold back to those same reviewers (or their institutions) for an insane markup.
> or perhaps they believe that peer review is a valuable use of their time.
Or perhaps they believe that if they don’t do that work, then bad papers will be published under the authoritative rubber stamp of the journals, and they will have to spend the same amount of time down the road finding out if/how the fascinating study is wrong, after having paid $40 for the PDF.
That time is coming from somewhere -- it's time that's not spent doing actual research. It's certainly not free.
When I was in academia, I did peer review initially because it was fun to be on the other side for once. And later mostly as a favor to the editors who asked me.
> it's time that's not spent doing actual research
Actual research includes keeping up to date with peers' progress.
If you were doing research but not regularly reading new publications anyway, either you were working on a less-fundamental-research side of things, or, without even realising it, you were exploring directions already explored or wasting time by not benefiting from a new useful approach.
It is just incorrect to say that the time spent carefully reading a peer's paper is taken away from doing research: if you were not peer-reviewing the paper, you might well have been reading the same paper carefully anyway, in order to keep up to date with your peers.
In fact, one can waste time peer-reviewing, but that is one's own fault: in general, you just need to accept reviews of articles that would be useful for you to read anyway.
I found some things wrong. Occasionally these were minor things that the authors corrected. Sometimes they were major problems, and I recommended that the paper be rejected because they were unfixable. In that case I'd usually see the same paper get published in a different journal later on.
So your conclusion should be: peer-reviewing is working, but some people are not using it right.
You then need to assess exactly what percentage of papers still get published despite having things wrong with them. If this percentage is small, peer review is still more profitable than no peer review at all (in which case 100% of the bad papers are published).
Have you ever worked in research on a topic that's a bit trendy or fascinating to the general population? Because the world is full of people convinced that they are the new Einstein.
A paper that is not peer reviewed is basically like a blog article. There are great blog articles, but also a lot with basically nonsense written by people who think they know something but don't. There's a lot of noise. How is someone going to know, before they read it, whether they should spend time on it?
It's a bar to clear. It helps to increase the signal to noise ratio. It's not perfect - bad articles get published, good articles get rejected. But imperfection is part of life.
One thing is that knowing your paper is going to be peer reviewed might discourage a lot of cranks from trying to publish. Also, I believe editors have the ability to outright reject papers before they even go out to peer review. This cuts down on completely irrelevant and terrible articles.
> A paper that is not peer reviewed is basically like a blog article.
The one thing has nothing to do with the other.
I have seen many "peer-reviewed" articles regarding Covid in 2020-2022 that were no better than blog posts. It isn't a panacea, and the last two years have drawn into sharp relief one of the major problems of the system: confirmation bias.
Peer review is worth no more than the "peers" who review it, their objectivity, personal investment in the claim, and the level of effort they put into the review. For example, if the "peers" are a group of people who are all seeking to confirm their prior beliefs about a controversial topic, the peer review process is almost completely useless as a signal of quality. Papers that confirm groupthink are routinely shunted to the high-status members of the group for like-minded "review" and rubber-stamp approval. Papers that challenge groupthink are almost invariably nit-picked and/or dismissed entirely. Many never even make it to peer review, because the journal editor rejects the submission.
Throw in issues like reviewer bias toward "established" authors and fields, as well as lack of blinding and selection bias amongst the reviewer pool, and you have a recipe for a formalized system of groupthink.
> It isn't a panacea, and the last two years have drawn into sharp relief one of the major problems of the system: confirmation bias.
I believe I was explicit in saying it wasn’t perfect.
What alternative is there that doesn’t have something resembling peer review, isn’t curated by third parties, doesn’t suffer from confirmation bias/groupthink, and doesn’t leave it up to authors to sift through 100s of papers of unknown quality a day to learn about what’s going on in the field?
It's not just "not perfect"...it's pretty much useless as a signal of quality.
Certainly, it doesn't come close to the level of credibility that you've attributed to it (i.e. papers without peer review are somehow "blog posts").
> What alternative is there that doesn’t have something resembling peer review, isn’t curated by third parties, doesn’t suffer from confirmation bias/groupthink, and doesn’t leave it up to authors to sift through 100s of papers of unknown quality a day to learn about what’s going on in the field?
There's no free lunch. Nobody serious in any scientific field relies on peer review to determine paper quality. You have to read the paper in front of you and use your experience, which means you must read not just "100s of papers", but literally THOUSANDS of papers, in order to have any reasonable chance of being a credible evaluator.
If you don't have this experience, you can't know the quality of a piece of scientific literature. That's just how it is.
Apart from active confirmation bias, simple blind spots can also cause a lot of trouble. As one example heuristics and biases researchers Stanovich and Toplak found that their Actively Open-minded Thinking (AOT) instrument had a bug. It stood for twenty years because the bug was consonant with a secular-liberal worldview so no one really thought it might be wrong. https://doi.org/10.1016/j.cognition.2019.03.006
This, of course, when the scientists themselves are honest, acting in good faith and eager to fix their mistakes as Stanovich and Toplak were.
In openly political fields like sociology and whateverstudies, you get "researchers" blatantly manufacturing their conclusions in scale construction.
As one example of some of the gems of scientific integrity in these fields is the finding that conservatives are higher in hostile sexism. Will any liberal check this? Of course not, because it conforms to their expectations that their political opponents are bad. Even if they are honest, if they just glance at a paper they won't question it, and it won't feel weird to read in a quick report on the research.
How was the sausage made? In at least one study, the entire hostile sexism scale was two items long, and one item was, not kidding, "feminists are making entirely reasonable demands of me." And in the greatest surprise ever people dislike their political opponents. Bam, conservatives bad, because we're measuring their attitudes towards politically partisan activists, not the fairer sex.
(The question also presumes that feminists' demands are reasonable. If the activist set are insane, a low score on the question would be an indicator of sanity, not misogyny or simple political partisanship. As an example, Simone de Beauvoir: "No, we don't believe that any woman should have this choice. No woman should be authorized to stay at home to raise children. Society should be totally different. Women should not have that choice, precisely because if there is such a choice, too many women will make that one. It is a way of forcing women in a certain direction.")
There's also the famous case of Bob Altemeyer's Right-wing Authoritarianism scale which is explicitly constructed so that leftists will score lower on it than rightists. Cue decades of results that authoritarianism is more prevalent on the right. Quelle surprise.
I guess they'd know the same way as they do for blog articles? I'm confused about the premise here. As the author says, all the academics I know already spend lots of their research time reading blog posts, preprints, and other sources which haven't been vetted for quality.
And one can come up with plenty of alternative ways of discovering relevant research without blocking or delaying publication or having researchers spend many hours bending over backwards for all kinds of minutiae that don't meaningfully help with that goal.
(Disclosure: I volunteer for one attempt at such an alternative, https://plaudit.pub.)
>Wow. Pretty clear this paper was not peer reviewed. Many of the arguments would not get past a high school English teacher.
How is that clear? There are plenty of papers out there that contain glaring errors and got accepted anyway, if you regularly read publications you should know this better than anyone.
Also you confirm one of their criticisms and contradict your own argument that peer review helps improve scientific literature. What do you base that on, apart from belief?
> This reminds me of the argument that we should not spend more money on education, because education budgets have gone up, but test scores have not. The argument implies (but does not state, because of the absurdity), that spending less money would raise scores.
The argument perhaps implies that spending less money would not lower scores, but as put here does not imply anything about raising scores.
I've long wondered if a part of peer review needs to be replication.
Or if there need to be better incentives around finding nothing out. Like, given that the goal is getting published, it feels like there should be better incentives for papers that don't find anything in an interesting way, or that test a hypothesis and discover the hypothesis was wrong.
Or that there are many factors that affect test scores (and many more alternatives competing for students’ attention). And, looking at average scores and average spending probably makes more subtle relationships more difficult to tease out. But it is hard for me to believe that increased funding is a bad thing for students.
> (When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”)
I'm not a researcher, but I've interviewed a lot of researchers in the context of data sharing. I'd go out on a limb and attribute this reluctance not to lack of any dataset whatsoever in most cases, but to a feeling of ownership over your "IP", and an instinct to protect it. In a niche scientific domain, it's not totally paranoid to worry that you could be giving your data to the competition. I heard this again and again, though mostly it was everyone else who did it, not the interview subject...
There's also just the work involved in getting the data in shape to send to people, which can be a pain in the butt.
The "experts" very rarely take enough time to actually understand the paper and question its assumptions, and often aren't even qualified to do so.
I had one article on the Xerox Star, way back in the day, that passed peer review for TOOIS with just some reasonable suggestions to cite related work.
It was rejected by CACM, by an "expert" reviewer at Microsoft Research who admitted he knew nothing about patent law. Microsoft is a big believer in software patents, so he's hardly unbiased.
At one point his comments said, because he had talked to a patent lawyer, "the patent law has changed!", while clearly not knowing that the laws on obviousness had not changed at all.
* There are fewer great discoveries because there are fewer great and easy things to discover
* Time has filtered the scientific studies of the previous centuries
* You don't judge the efficiency of a filter (peer review) by the amount of shit that passes through, but by the ratio between that amount and the amount of shit that doesn't pass through (see the sketch after this list)
* A lot of researchers don't want to release their raw data not because they invented their data but because they want to keep the monopoly for them
* Peer review has failures and could be improved, but it is the best and quickest way to judge a scientific work (along with credentials, let's be honest)
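Since the third bullet is easy to misread, here is a sketch with purely made-up numbers (not data from the article) showing why you compare what slips through against the no-filter counterfactual:

    # Hypothetical submission pool and acceptance rates, illustration only
    good, bad = 800, 200                     # submitted papers
    p_accept_good, p_accept_bad = 0.8, 0.2   # assumed acceptance rates
    pub_good = good * p_accept_good          # 640 good papers published
    pub_bad = bad * p_accept_bad             #  40 bad papers published
    print(pub_bad / (pub_good + pub_bad))    # bad share with the filter: ~0.06
    print(bad / (good + bad))                # bad share with no filter:   0.20

Plenty of shit still gets through, but under these toy numbers the published literature is far cleaner than the submission pool; judging the filter only by what passes through misses that.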
I strongly disagree with the first point: As a society the more we discover, the more we increase the surface of the discoverable and the more we can combine things. It only gets harder for the individual because it takes longer and longer to learn everything necessary to get to the frontier of the unknown. It is not a problem of running out of viable research topics.
I agree with the easy part. The low hanging fruit has mostly been discovered. If you want to invent something new you’re probably going to need a team of SMEs working on it.
> Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat.
Reduce the social, political, professional, and academic cost of "being wrong", learn to have a little more empathy and suddenly negative reviews will hold more meaning once again. People will learn to learn instead of learn to defend. (Of course, defense is a skill worth learning as well. Please don't take what I'm saying as this-or-that.)
You simply cannot do away with peer review! What you can eliminate is the control strictures that exist today which punish failure unfairly. This is true at all levels, from the highest institution right on down to that conversation you had with a colleague over coffee yesterday.
AI peer review might be a solution. This would hopefully tend to limit rejections to technical issues like a poor choice of statistical analysis, and eliminate ideological conflicts.
Where peer review gets a bit more shady and political is in the federal funding system, where the outcome is not a publication, but delivery of thousands to millions of dollars in federal funding. That's where politics really comes into play, although nobody in any field wants to talk much about it publicly.
Peer as in the aristocratic tradition of people of equal social standing and status? That's how you end up with institutional corruption, one hand washing the other, PI's rubber-stamping each other's shoddy work because members of the club are expected to support each other, and so on.
Or, peer as in someone of equivalent actual knowledge and experience? AI isn't there yet, but I imagine a machine learning model trained on published papers in a given field would be able to quickly spot a lot of simple errors and/or poorly supported conclusions.
A general purpose AI is going to struggle with technical concepts and terminology that it has never learned before, but the whole point of many papers is to introduce these very concepts and terms.
That's the key here: if peer review is feedback rather than a barrier to sharing your findings and progressing in your career, then negative (though of course, in this situation review would no longer be a binary negative/positive) feedback could be considered input again. But the current setup of peer review, like this article is arguing against, assigns a high cost to "being wrong" (in the eyes of about two reviewers).
Right, but where is the threshold for wrongness which is so widely accepted as truth that it becomes harmful enough to be actively removed? If I began preaching of the correctness of 1+1=3 surely someone would deny my publication, no? Or, the critics would outweigh the proponents. How can this be accomplished without division or empathy?
You'd still have peer review, but in a very different sense. Instead of a formal process by 3 reviewers, you'd have an informal process by everyone who read the paper. People would read your dumb 1+1=3 paper, realize that it was wrong, ignore it, and pay less attention to you in the future.
> That debunked theory about vaccines causing autism comes from a peer-reviewed paper in one of the most prestigious journals in the world, and it stayed there for twelve years before it was retracted. How many kids haven’t gotten their shots because one rotten paper made it through peer review and got stamped with the scientific seal of approval?
The point is not to prevent it from getting out there in the first place — that is already possible; you can just upload a PDF to some website. Doing that for a paper arguing 1+1=3 wouldn't be any more accepted if peer review's role were limited.
Peer review is not the reason academic research has turned into a garbage fire - it's because of the corporatization of the university and the transition from the open research model to the patent-seeking profit-generating research model.
The latter has its place, certainly, but before the 1980s that place was industry-funded private research institutions, the most famous and well-known being Bell Labs. These institutions were funded by groups of corporations who in return got first dibs on their patentable work, and the researchers and engineers in turn got good salaries and didn't have any teaching responsibilities.
After Bayh-Dole passed in the 1980s, universities were allowed to grant exclusive licenses to their patentable research to private entities, and this resulted in the rise of the corporate university model - people running around hiding their data until they got the patent, people fudging results to make their novel drug look good so their startup would be bought by some pharma corporation, administrations being overrun by pharmaceutical executives who only cared about universities generating a steady stream of patents, and forget about quality-of-teaching.
It's been a complete disaster, and not only that, the private research world has largely been defunded as corporations realized that, with the aid of politicians, they could just outsource basic R&D to universities, using grad students as cheap underpaid lab labor on projects where they're little more than glorified lab techs, and not really doing any original research.
In addition, the whole system is increasingly authoritarian and censorship is rife - any academic who calls for the elimination of exclusive licensing and the adoption of open-source drug discovery will soon find themselves without a job. It's similar in many ways to the Lysenko era in Soviet science, just run by corporate authoritarians instead of Stalinists. Avoid if possible.
I would add that the funding source for most research now is governments, which means that research is now optimized for getting government grants instead of for actually advancing science.
>> (Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)
Yes, well, how shall I put it? Not everyone who attempts to publish a paper is Albert fucking Einstein.
Humans can game every system. Then you can tweak the system to make it better for a period of time till it is gamed again. Rinse, wash and repeat. Same flaws exist with all ratings systems and many are just mathematical garbage.
I did my time in the peer review trenches trying to give constructive feedback and reproduce results, only for editors to stop asking me to review or saying "don't waste this much time on this"
That’s hardly the only way to be wrong. Suppose you made a mistake in the synthesis process and the dosage described doesn’t match the dosage received by patients. Now the paper is incorrect but possibly still quite useful.
I see a few major problems in the pursuit of modern scientific inquiry.
- Bureaucratization. The modern university model is basically the DMV. It functions to sustain the lifestyle of the administrators in charge and to prevent them from losing power, money, status. And no one enjoys it, not the employees, not the customers, etc. and yet who can change it for the better? It can and does only get worse, more bureaucratic, more soul sucking. Ask the post-docs.
- Feminization of academia and science. This is related to the bureaucratization. Process, safety, paperwork, meetings and community consensus are paramount. All becomes politics. Everyone must agree. Only small questions can be answered. You need permission for everything. Anything truly novel is considered a threat to the scientific community. Modern day "scientism" and other beliefs are the replacement to christianity in the west, and women are more religious (look it up), and academe is the modern day church. All dissent or inquiry is squashed.
- Denial of Great Man Theory. Great men invented modernity, and of course our modern industrial state requires many layers of managers due to the huge complexity, so sizes of labs, colleges, corporations, assembly lines, supply chains have exploded, and no one person can manage it all in their mind.
The false belief is thus: modern industrial society requires the managerial class to function, therefore the managerial class invented it, and it requires the managerial class to progress. Therefore great man theory, or the idea of the innovative genius, is supposedly debunked.
And yet you see nearly all major advances come from tech bros like Elon Musk et al who are trailblazers, ignore complaints of the bureaucracy, etc.
- Could the university system have produced Facebook, SpaceX, Tesla, Microsoft, Apple? You can go on and on.
Please don't interpret this as an attack on women or femininity. Society requires everyone for it to function properly, and everyone has a positive contribution to make, but we have to be able to iterate and change when we realize certain modes of endeavor simply do.not.work.
Interesting political philosophy, but little that speaks to the pursuit of science. Universities may be bureaucratic, but those bureaucracies have almost nothing to do with scientific agendas. Likewise, grant review panels work hard to find innovative, not consensus, proposals. Again, the main thing the management does is demand more external funding; it does not set scientific direction. And, of course, Facebook, Microsoft, etc. is not science -- at best it is engineering.
> Likewise, grant review panels work hard to find innovative, not consensus, proposals.
I don't want to give specific examples of research topics to avoid poisoning the well, but could you argue with a straight face that an academic that wanted to do research into some topics that could be likely to yield certain types of politically incorrect conclusions wouldn't face extreme difficulty getting funding or extreme risk to their career?
I think it's easy to see how much scientific research on certain topics could be stuck within a narrow range of opinion because people are more concerned with what gets funding or doesn't get them shunned.
I do not know anything about grant panels outside my field of biology. But I am certain that the overwhelming majority of grant money is spent on scientific questions that have virtually no obvious political dimension. I’m sure there are grant applications that have a substantial political component, but I would be surprised if they accounted for even 5% of research funds. Social and political science receives a very small fraction of research funds, and while one might argue that allocations of health research budgets are politically shaped, viruses and oncogenes have no politics.
Peer review incentivizes the worst features of humanity: it affords peers in your field the ability to reject your paper because they want to take the idea for themselves and then publish it.
In the corporate university, the usual game is to get the patent approved first, and hold off on the publication until that happens. At least that's the case in applied technical science - the social science world seems more like a club where the involved players just rubber-stamp each other's non-reproducible work in order to up their publication count and pack their CV. The actual work has no value, so why would anyone bother stealing it?
If that were true we'd see more patent applications in, say, engineering, than journal papers in the same field. (Or at least around the same order of magnitude). Do the numbers check out?
My experience was: most papers are not useful for patents. Corporate funding might require a pre-publication review for patent-worthiness, and if a patent is warranted, it is applied for before publication.
Most researchers I knew had little interest in patents, so non-corporate funded papers got published without patent considerations. When it came up, it was a before-you-publish-think-about-it topic.
Peer review in academia may very well be like the suppression of prostitution in Aquinas' thought: yeah, prostitution is really bad, but if you try to suppress it, things will become far worse. The prudent statesman, while rightly repulsed by prostitution as such, humbly recognizes the limits of his power to constrain human vice through legal means. Of course, that doesn't mean that there aren't ways to encourage a populace toward self-reflection and self-discipline that don't involve legal suppression.