
What should a scientist do if they realize there is an error in their research? What implications can this have for their future career?

Retraction Watch was launched to provide insights into cases of scientific fraud and to offer a window onto the retraction process. How does the scientific community react when you publicly report on retractions of scientific papers? How should retractions be received by the scientific community?

While there are some scientists who are concerned that publicizing retractions — “airing science’s dirty laundry,” so to speak — could give ammunition to those who want to defund research, most are very supportive. They realize that the best way to build trust is to acknowledge that errors and misconduct exist, and to show the world how science corrects itself. In other words, the growth in retractions is a good thing.

There’s even evidence that scientists who “do the right thing” won’t see a decline in their career prospects. But that depends on retraction notices that tell the whole story: it only holds when those notices make clear that no fraud or misconduct was involved. We have, of course, argued for detailed notices since we launched Retraction Watch.

How have digital platforms and tools changed the way erroneous results are uncovered? How have they enabled Retraction Watch?

The simple fact that the vast majority of scholarly papers are online today means that more people can read — and scrutinize — them. Using a program like Photoshop, a scientist or layperson can reverse engineer the manipulation that created a problematic image. As I write this, a team has just announced a way to find duplicated images that authors claim represent different results, but clearly can’t. And plagiarism detection software is of course a good tool to screen for plagiarism and duplication — aka “self-plagiarism” — of text.
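To make the idea of automated image screening concrete, here is a minimal, purely illustrative sketch of one common approach: a perceptual “average hash,” where near-identical images produce hashes that differ in only a few bits. This is not the method of the team mentioned above, and the file names and threshold are hypothetical.

```python
# Illustrative sketch only: flagging possible duplicated figures with a naive
# perceptual "average hash". File names and the distance threshold are
# hypothetical examples, not real data.
from PIL import Image

def average_hash(path, size=8):
    """Downscale to size x size grayscale, then mark each pixel as above or below the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the images are near-duplicates."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

h1 = average_hash("figure_2a.png")
h2 = average_hash("figure_5c.png")
if hamming_distance(h1, h2) <= 5:  # threshold chosen arbitrarily for illustration
    print("Possible duplicated image - worth a closer manual look")
```

Real screening tools are far more sophisticated, but the principle is the same: reduce each image to a compact fingerprint and flag suspiciously similar pairs for human review.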

What has really moved the field forward, however, are platforms such as PubPeer, an “online journal club” where scientists share critiques of papers. A number of corrections and retractions have flowed from those discussions. (PubMed Commons was another such tool, although discussions were not as critical, and the National Library of Medicine recently decided to shut it down.) Moving forward, the growth of preprints in biology and related fields could allow detection of errors before papers are even officially peer-reviewed.

Control and active watching alone cannot solve the problem of the retraction process's lack of transparency. In this context, research ethics matter a great deal. How could better research ethics evolve?

I would argue that sustained attention to the retraction process has already led to some changes. Take the Journal of Biological Chemistry (JBC), for example. For the first five years of Retraction Watch, the JBC’s retraction notices were bereft of content, noting simply that the article in question had been retracted at the request of the authors or the editors. In 2015, that changed, and today the JBC is one of the most aggressive journals when it comes to policing the papers it publishes.

In a larger sense, however, it’s true that preventing all misconduct is impossible, just as expecting to prevent all theft or violence is a fool’s errand. As with crime, the best approach is one that combines prevention, detection, deterrence, and rewards for the kind of behavior we want to promote.

In your opinion: What should academia change in the way it publishes and reviews results? What role should academic and professional education play?

As others have noted, the “winner takes all” approach to credit in science — and therefore to doling out grants and jobs — has created enormous pressure and forces researchers to publish in only a handful of “prestigious” journals. That means everyone is always striving for perfect and earth-shattering results, and that there is little incentive to repeat experiments or admit error.

It sounds simplistic and perhaps even naive, but we all need to remember how to read papers, instead of judging them by metrics. That’s where initiatives such as DORA — the Declaration on Research Assessment — come in. We need to create incentives for replication, for open data, and even for openly critiquing others’ work if we want to see changes.