Study Reports Scientists Are Poor at Judging Peers' Work
When people want to improve in their field of work, the first group they might turn to is their peers. Since co-workers and colleagues share many of the same experiences, asking them for criticism or advice might seem to yield the best results. For scientists, however, that may not hold. In a new report, researchers found that scientists are unreliable judges of their colleagues' research and published papers.
"Scientists are probably the best judges of science, but they are pretty bad at it," said the report's lead author, Professor Adam Eyre-Walker of the University of Sussex in the United Kingdom.
For this article, Eyre-Walker worked with Dr. Nina Stoletzki to analyze three ways of assessing scientific papers: peer review, number of citations and impact factor. Peer review is a subjective assessment in which other scientists offer their own opinions of the work. The number of citations counts how often a paper is referenced by later studies. The impact factor reflects the average citation rate of the journal in which the paper appeared. The researchers used two sets of peer-reviewed articles to examine these three methods.
The research team discovered that scientists rarely agreed with one another about the importance of a specific paper or finding. Rather than judging the content itself, scientists were strongly biased by where a paper was published, rating papers that appeared in high-profile journals more favorably. They were also influenced by the number of times a paper had been cited by other scientists. Neither measure takes into account the content or the actual importance of a study's findings.
"The three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased and expensive method by which to assess merit. While the impact factor may be the most satisfactory of the methods considered, since it is a form of prepublication review, it is likely to be a poor measure of merit, since it depends on subjective assessment," the authors wrote.
The researchers believe that further research into the reliability of peer review in science would be helpful. The study was published in PLOS Biology.