
The peer review system is not broken, but journal editors should do something beyond normal peer review to ensure the integrity of the data they publish. They need only do so, however, for those papers that might make the journal look really bad if the work does, in fact, turn out to be wrong.

These are the basic messages of the report from an external committee convened by Science to investigate its handling of two papers by Woo Suk Hwang in the now infamous stem cell fraud case (1). Although the committee took a step in the right direction by calling on journal editors to take more responsibility for data integrity, it provided the misguided recommendation that “special scrutiny” should be applied to manuscripts most likely to have “consequences for the reputation of Science and science”.

This advice has shifted the public's dialogue with the editors of Science from the question of what steps they can take to help ensure data integrity (2) to the question of what defines a “risky” paper (3). Such a discussion is wasted effort—standards of data integrity must be applied uniformly to every paper published, not selectively to an ill-defined subset of papers. The committee's implication that integrity only counts for high-profile papers is particularly dangerous at a time when the public is questioning its trust in science and scientists.

The committee also makes the elitist recommendation that Science, Nature, and “perhaps a few other high-profile journals” should work together to establish common standards for ensuring data integrity.1 Even more important than developing standards, however, is enforcing them. We at the JCB developed standards for the integrity of digital image data four years ago, and we screen every image in every figure of every accepted manuscript to ensure that none violates those standards. Many journals have adopted our standards in their instructions to authors, but most do not enforce them with routine screening. This perpetrates a fraud on the community by implying that the papers published in the journal are actually held to those standards when they are not. Only slightly better is the Russian roulette policy of Nature, which recently started screening a single paper in each issue of the journal (5).

One of the few journals outside The Rockefeller University Press that does routinely screen images for manipulation is, in fact, Science. Its image screeners were trained by our screener, but its Editor in Chief insists that these methods would not have picked up any problems in the Hwang manuscripts (6, 7). This is simply not true, as I have noted elsewhere (8) and show again here (Fig. 1).

I have also consistently acknowledged that image data are only one of many types of data we publish. But by their very nature, digital images can be easily examined for evidence of manipulation. Of course, standards for other types of data can and should be developed and enforced. To this end, the National Academy of Sciences has recently commissioned a study on the integrity of research data, with a goal of developing universal standards. It will be vital for journal editors to participate in this dialogue with the scientific community, to help devise effective and practical standards that can be applied to the published literature. This is clearly not an issue that should be left to the editors of a few “high-profile” journals to decide for the community, but rather one that the community needs to decide for itself.

The Hwang committee's report indicates that it is becoming unacceptable for journal editors to hide behind the veil of peer review. Given the massive amounts of time, effort, and public and private funds that now go into research, it is also becoming unacceptable for editors to argue that research fraud will all come out in the wash once others find they cannot repeat the fabricated result. The progress of science depends on the reliability of the entire published record, and journal editors must do their part to ensure that reliability.

References
1. Brauman, J., J. Gearhart, D. Melton, L. Miller, L. Partridge, and G. Whitesides. 2006. Supporting online material. Committee Report. http://www.sciencemag.org/cgi/data/314/5804/1353/DC1/1 (accessed December 29, 2006).

2. Couzin, J. 2006. Stem cells…and how the problems eluded peer reviewers and editors. Science. 311:23–24.

3. Kennedy, D. 2006. Responding to fraud. Science. 314:1353.

4. 2006. Standards for papers on cloning. Nature. 439:243.

5. Couzin, J. 2006. Scientific publishing. Don't pretty up that picture just yet. Science. 314:1866–1868.

6. Cook, G. 2006. Technology seen abetting manipulation of research. Boston Globe. January 11, 2006.

7. Fodor, K. 2006. Panel recommends changes at Science. The Scientist. http://www.the-scientist.com/news/display/36969 (accessed December 29, 2006).

8. Rossner, M. 2006. How to guard against image fraud. The Scientist. 20:24–25.

9. Hwang, W.S., S.I. Roh, B.C. Lee, S.K. Kang, D.K. Kwon, S. Kim, S.J. Kim, S.W. Park, H.S. Kwon, C.K. Lee, et al. 2005. Patient-specific embryonic stem cells derived from human SCNT blastocysts. Science. 308:1777–1783.
1Presumably Nature was included in this statement because one of its editors was on the committee. Given that Nature felt the need to question its own review process in relation to a paper by Hwang and colleagues (4), perhaps its editors should have been testifying to the committee rather than sitting on it.