That’s the substance of a pair of articles by Adam Mastroianni at Experimental History. From the first article, The Rise and Fall of Peer Review, December 13, 2022:
Peer review was a huge, expensive intervention. By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system, which is a big chunk of time when people are trying to do things like cure cancer and stop climate change. And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer-funded, and none of that money goes to the authors or the reviewers.
Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.
It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning discoveries from the 1990s and 2000s because there aren’t enough of them.
Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it’s all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.
The article elaborates on that at length. Then we get this:
I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. If you believe in weak-link science, you think it’s very important to stamp out untrue ideas—ideally, prevent them from being published in the first place. You don’t mind if you whack a few good ideas in the process, because it’s so important to bury the bad stuff.
But science is a strong-link problem: progress depends on the quality of our best work. Better ideas don’t always triumph immediately, but they do triumph eventually, because they’re more useful. You can’t land on the moon using Aristotle’s physics, you can’t turn mud into frogs using spontaneous generation, and you can’t build bombs out of phlogiston. Newton’s laws of physics stuck around; his recipe for the Philosopher’s Stone didn’t. We didn’t need a scientific establishment to smother the wrong ideas. We needed it to let new ideas challenge old ones, and time did the rest.
If you’ve got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don’t actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say “INSPECTED BY A FANCY JOURNAL,” and those stickers are very hard to get off. That’s way scarier.
Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat.
There’s a second article, The dance of the naked emperors (Dec. 20, 2022). Near the beginning:
At its core, this is an argument against scientific monoculture. Why should everyone publish the same way? You’d have to be extremely certain that way was better than all other ways—and that it was better for every single person!—and that amount of certainty seems pretty loony to me. Uploading a PDF to the internet worked for me, but there are lots of other ways people could communicate their findings, and I hope they try them out.
Later on Mastroianni notes that he’s
not worried about an onslaught of terrible papers—we’ve already got an onslaught of terrible papers. You learn very quickly to ignore them, just like you learn to ignore junk mail, stupid Netflix shows, and spam calls. Again, science is a strong-link problem. I’m not worried about how many bad papers there are; I’m worried about how many good papers there are.
After discussing various objections to his earlier article, he weighs in on social dominance:
Scientists may think they’re egalitarian because they don’t believe in hierarchies based on race, sex, wealth, and so on. But some of them believe very strongly in hierarchy based on prestige. In their eyes, it is right and good for people with more degrees, bigger grants, and fancier academic positions to be above people who have fewer of those things. They don’t even think of this as hierarchy, exactly, because that sounds like a bad word. To them, it’s just the natural order of things. [...]
People who are all-in on a hierarchy don’t like it when you question its central assumptions. If peer review doesn’t work or is even harmful to science, it suggests the people at the top of the hierarchy might be naked emperors, and that’s upsetting not just to the naked emperors themselves, but also the people who are diligently disrobing in the hopes of becoming one. In fact, it’s more than upsetting—it’s dangerous, because it could tip over a ladder that has many people on it.
But look, it should be pretty easy to defend a system that’s supposedly based on evidence. You should be able to count the costs and benefits of doing things the way we do them, and it should be clear that the benefits outweigh the costs. You should be able to lay out the data without embarrassment, you should be able to answer any good-faith objections, and you should be both honest about and comfortable with the limits of your knowledge. You definitely shouldn’t have to resort to invoking prestige or threatening to get me fired. If that’s all you got, you ain’t got much.
Here's a post where I discuss my own experiences with peer review: Rejected @NLH! Part 4: Déjà vu all over again at New Literary History + Welcome to the club, Franco! [#DH #Canon/Archive]. As the title suggests, there’s also some discussion of the problems Franco Moretti had with publications by Stanford’s Literary Lab.
And judging by the comments he got, a lot of people in the business see it the same way. The physician-scientist Bjorn Nordenstrom wrote up his scientific work as a self-published book for this very reason. And he had the credentials to prove his chops -- the Karolinska Institute, a seat on the Nobel Committee for medicine. He received a lot of negative criticism, but he was ahead of his time. (He is now deceased.) His work represents as different a paradigm in physiology as Einstein's work on gravity does relative to Newton's. Anyway . . .