Bold ideas and critical thoughts on science.

Janet Hering's take on reconnecting academic research with societal needs.

Let’s start with the obvious. Evaluation and assessment are part and parcel of the scientific profession. Universities want to hire the best faculty, funding agencies want to support the best projects, and journals want to publish the best papers. To this end, scientists serve as members of hiring and promotion committees and on panels for funding agencies. We also write evaluation letters and reviews of manuscripts and proposals. Of course, this all takes time, which could otherwise be spent on our own research.

These competing demands on our time tempt us to rely on indicators as proxies for quality. But saving time does not justify disregarding the many critiques of indicators such as the Journal Impact Factor (JIF) and the h-index. Indicators are often opaque and fail to accommodate field-specific characteristics; furthermore, they can be manipulated and can create perverse incentives that, in the worst case, compromise scientific integrity (Alberts et al., 2015; Hering, 2018; Hicks et al., 2015; Klein and Falk-Krzesinski, 2017). This issue becomes even more pressing when the scientists being evaluated are conducting applied research whose outputs are not well aligned with conventional indicators (Hering et al., 2012; Moher et al., 2018).

I would certainly not argue that we should ignore an individual’s professional record, but we also need to be aware of the anchoring effect (Kahneman, 2011) of our first glance at a CV, a list of publications, or even the affiliations that commonly appear on the cover page of a proposal or manuscript. But what if the first information we received about someone applying for a faculty position or submitting a proposal or manuscript did not provide us with shortcuts for assessing quality?

STARTING WITH A PLAIN LANGUAGE STATEMENT

I would propose that all reviews (i.e., of job applications, funding proposals, and manuscripts) start with the reviewer reading a plain language statement (AGU, 2018). This document should be concise, clearly written, and focused on the scientific and societal significance of the job application, funding proposal, or manuscript. Authors would be instructed to focus on content and to avoid, as much as possible, mentioning specific institutions or other aspects of their “pedigree”.

Reviewers would be asked to assess the plain language statement(s) (i.e., as exceptional, adequate, or inadequate) before they are given access to the complete application file, proposal, or manuscript. This would help to anchor their subsequent responses to content rather than to proxies for quality (i.e., indicators).

This process would have the greatest benefit for bodies like hiring committees or funding agency panels that have to compare a large number of applicants or proposals. Faced with this workload, committee and panel members are more likely to feel overburdened and to be tempted to take shortcuts. Pre-assessment of the plain language statement could encourage committee or panel members to consider applications or proposals with exceptional statements even if the applicants are not the strongest by conventional indicators. Taking this even further, applications with “inadequate” statements could be excluded from further consideration.

Even when the evaluation is made on a case-by-case basis (i.e., for promotion and tenure decisions or individual reviews of manuscripts or proposals), pre-assessment based on a plain language statement could help to reinforce the recommendation that “the candidate should be evaluated on the importance of a select set of work, instead of using the number of publications or impact rating of a journal as a surrogate for quality” (Alberts et al., 2015).

I would certainly not claim that such adaptations of review processes would change the culture of academic assessment of scientists overnight. But I think they could go a long way toward shifting the focus from indicators to content. I feel that such a shift is vital to the future success of the scientific enterprise, not only in the context of problem-driven or solution-oriented research but also for truly creative, curiosity-driven research.