
Peter Weißhuhn, Martin Schmidt

Impacting the ‘real world’: out of sight for science?

The conflict for scientists and research evaluation between scientific impact and tackling societal challenges.
19 March 2018

What is the overarching problem with scientific impact?

 

Scientific impact can be understood as the influence of research on other research, for example by providing confirming or contradicting results, new research questions, or breakthrough methods. It is usually measured by bibliometrics such as citation counts and publication numbers (Ravenscroft et al. 2017). In addition, academia is also about personal reputation and prestige within a scientific community. Nowadays, publishing results in English-language, international, high-impact-factor journals is rewarded most.
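For context, and purely as a worked definition added here (it is not drawn from the cited sources), the widely used two-year journal impact factor is nothing more than a citation ratio:

\[ \mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}} \]

where \(C_{Y}(Y-i)\) is the number of citations received in year \(Y\) by items the journal published in year \(Y-i\), and \(N_{Y-i}\) is the number of citable items it published in that year. Like raw citation counts, it says nothing about uptake outside academia.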

However, scientific excellence does not necessarily capture the contribution of research to solving societal challenges, such as those framed by the Sustainable Development Goals (SDGs). On the contrary, “[s]cientists don’t get rewarded for having real world impact”, says Achim Dobermann*.

Actually, what is ‘real world’ impact? Some understand it as societal impact (Wolf et al. 2015), others as the influence on public values (Bozeman 2003), subsuming practical, political, social, environmental, and economic changes. A researcher, especially in basic research but also in applied science, seeks to answer a research question developed to address knowledge gaps and his or her own curiosity, and to publish this work frequently. Yet non-scientific users of research results are unlikely to read scientific journals, and not all of them necessarily speak English.

Further, we think a researcher’s work is more than publishing methods and results. For example, what about lectures as societal impact? Training scholars and students is a valuable task for knowledge transfer that can strongly influence the future work (and impact) of those trained. And what about many scientists’ work as moderators or facilitators in transdisciplinary research, for example in climate adaptation initiatives or scenario development?

Seeking good measures

 

Research impact assessment

Looking at the state of the art in research evaluation, it is hard to find a measure that is comprehensive, sensitive, replicable, unbiased, and safe against gaming all at once. Yet merely “counting the countable” does not solve this problem. On the contrary, the increased use of off-the-shelf metrics, which serve narrow views and interests, has subtly turned them from accompanying information into the de facto main quality criterion for assessing researchers, research projects, or research institutions. All other qualities, like consulting, teaching, or entrepreneurship, tend to be overlooked or underestimated. A one-size-fits-all approach does not work; a combination, probably a diverse one, of quantitative and qualitative measures seems much more promising.

In general, we could learn from front-running research concepts such as citizen science or transformative science (WBGU 2011), and from policy and research impact studies that include intended and unintended, positive and negative, long- and short-term impacts of research while covering all three dimensions of sustainability, i.e. environmental, social, and economic impacts (Weißhuhn et al. 2018).

Spaapen and Drooge (2011) suggested measuring productive interactions between scientific and societal actors. A similar idea is now implemented in Australia as the Engagement and Impact Assessment, which defines engagement as the interaction between researchers and research end-users outside of academia for the mutually beneficial transfer of knowledge, technologies, methods, or resources. Look up the report on the pilot period in 2017 for details on indicators and measurement.

Proposal and project evaluation

Partly, the problem starts in the cradle: in grant applications or proposals, scientists are evaluated for their scientific impact. Their communication skills or their real-world impact are rarely decisive. However, to justify research funding, the European Commission, among other institutions, increasingly demands that research contribute to sustainable development, i.e. proof of ‘real world’ impact is a mandatory evaluation criterion for Horizon 2020 proposals. This is an important step towards giving ‘real world’ impact more weight.

There is also a conflict between project duration and the time it would take to tackle societal challenges sustainably and scientifically within a project. Too often, projects are tied to people rather than goals – for example, the already mentioned SDGs. Grants and their durations should be diversified so that long-term research projects can exist alongside – where appropriate – short-term blue-sky research. Insisting that all projects be three-year projects is less conducive to innovation. The permanent time pressure, due to the excessive use of fixed-term contracts, creates mental stress and potentially causes a brain drain in public science.

Altmetrics

Although hard to quantify, the non-academic impact of communication activities seems to be important (Lomas 2000, Smith 2001). Capturing it would require information from social media posts, press releases, news articles, and political debates stemming from academic work (Ravenscroft et al. 2017). This seems to be the approach of altmetrics, which try to enrich the assessment of research by including all media outlets alongside publication and citation numbers. Accordingly, a top Chinese university has implemented social media assessment for researcher evaluation, measuring the impact of researchers much like that of “influencers” on online platforms. Although gaining importance in the attention economy**, visibility is just one step towards (potential) impact. Altmetrics hardly measure real-world impact, but rather new-media buzz. Such metrics run the risk of reducing impact to publicity and thereby inciting confrontational and clickbait publications.
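To make this point concrete, here is a minimal, purely hypothetical sketch of such an attention score as a weighted sum of mention counts; the source names and weights are assumptions for illustration and do not reproduce the formula of Altmetric or any other provider.

# Hypothetical attention score: a weighted sum of mention counts.
# Source names and weights are illustrative assumptions, not any
# real provider's formula.
MENTION_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "policy_document": 3.0,
    "tweet": 0.25,
}

def attention_score(mentions):
    """Return the weighted sum of mention counts; unknown sources count as zero."""
    return sum(MENTION_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# A burst of tweets can outweigh a policy citation, illustrating how
# such scores reward publicity rather than real-world change.
print(attention_score({"tweet": 400, "policy_document": 1}))  # 103.0
print(attention_score({"news": 2, "policy_document": 5}))     # 31.0

Whatever the exact weights, the result counts visibility, not changes in practice or policy.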

Another critical point about altmetrics and similar digital services for research assessment is that they amplify the problem of living in a “filter bubble”, i.e. individual researchers receive more and more of their information through filters that distinguish relevant from distracting information. But where do we end up if research results are only noticed when some algorithm assesses them as relevant (Kosmützky 2017)?

Ways out?

 

1. Science communication
Scientists communicate their research beyond academic journals as speakers, commentators, evaluators, and consultants. In recent years, blogs, tweets, podcasts, and webpages have also emerged as valuable platforms. Digital communication is particularly valuable, as it fosters transparency and can be tailored precisely to the needs and interests of society (Birge et al. 2015).
In light of the flood of scientific articles published these days, a shift from ‘research & publish’ to ‘research & communicate’ might be the answer. However, the mistake of letting biased measurements steer researchers’ behaviour through (self-)control or incentives not driven by insight (Kosmützky 2017) should not be repeated.

2. Teaching and coaching
Most scientists are not professional communicators. Not everybody wants to become an expert in this, but for those who do, there should be training. There are “Impact Schools” that give insights into working with the media and society, business, or politics, as well as podcasts that provide tips on enhancing the impact of research.
Beyond intrinsic motivation, there should be other incentives for, e.g., communication or teaching. To reward excellence beyond research, the UK, for example, implemented a scheme called the Teaching Excellence and Student Outcomes Framework (TEF), and it is not alone on this frontier: Peter Imboden suggests supplementing the well-financed German Research Foundation (DFG) with a German Transfer Foundation (ZEIT 47/2017), while Manfred Prenzel sees a need for a German Teaching Foundation (ZEIT 18/2017).

3. Transdisciplinary science
To have real-world impact, researchers should not only communicate their findings to society but get involved with it. This would mean really leaving the framework of disciplinary research: not only working across disciplinary boundaries (interdisciplinarity), but actually transcending the world of disciplines entirely (transdisciplinarity). Transdisciplinary research projects should involve relevant stakeholders, i.e. actors that can affect or be affected by the research activity, from the very beginning.
A participatory research project would be co-designed by researchers and stakeholders to ensure that the co-produced results are relevant and easy to take up for action, which would result in real-world impact. Of course, skills in moderation and in communicating with lay people seem necessary for success. The analysis of so-called participatory impact pathways is established in some research areas (e.g. development research, agricultural research), but may help to increase impact in other areas as well.

4. Transformative science***
Science does not exist in an ivory tower but is part of society. It has the responsibility to take up current and future societal challenges and to derive interesting and relevant research questions from them. Transformative science goes even further and expands the responsibility of science from merely analysing and monitoring environmental, social, or economic changes to actively promoting the sustainable development of society (e.g. Schneidewind and Singer-Brodowski 2013). A specific type of transformative research could be living labs that address real-world problems with real-life use cases.

5. Proposals and evaluation
As we pointed out, the problem partly lies beyond individual scientists. By adapting research funding, especially the underlying application and evaluation procedures, academia has the chance to reshape its very foundation. For example, funding organisations could require applicants to communicate their science in diverse (digital) ways. Similar to publication costs, financial resources for this should be built into the project application. With explicit money for communication, professional agencies could be involved for targeted communication, e.g. to political decision-makers. This, of course, has to be reviewed after the project’s end or even during implementation.

Acknowledgment

We want to thank Brooke Struck for a fruitful conversation on the topic and his help in language editing. In his blog, he wrote about the history of and recent developments in research impact.

* Dobermann is the Director and Chief Executive of Rothamsted Research, one of the oldest agricultural research stations in the world, which has run experiments continuously for more than 150 years.
** In the face of the exponential increase in publications since the onset of modern science (Jinha 2010), attention has become the major currency (Franck 1998) and, at the same time, the bottleneck for acquiring new knowledge (Kosmützky 2017).
*** In the sense of a science that actively promotes transformative processes in society (WBGU 2011, p. 374), not in the sense of the US National Science Foundation, which defines transformative research more narrowly as radically changing ideas, discoveries, or tools that impact science, engineering, or education.

Author info

Peter Weißhuhn graduated in geoecology and is thereby trained to understand the complex interdependencies between multiple environmental factors. His PhD analyses the vulnerability of biotopes with a landscape-ecological approach. He receives a doctoral scholarship from the DBU (German Federal Environmental Foundation) while working at the Leibniz Centre for Agricultural Landscape Research. His additional scientific interest is research impact.

Martin Schmidt is an agricultural and soil scientist by training. He is currently doing his PhD in environmental modelling at the Leibniz Centre for Agricultural Landscape Research and is a visiting scientist at the University of British Columbia. Besides that, he is an associate researcher at the Alexander von Humboldt Institute for Internet and Society. One of the most important things to him is the integrity of science.

Digital Object Identifier (DOI)

https://doi.org/10.5281/zenodo.1183386

Cite as

Weißhuhn, P., Schmidt, M. (2018). The conflict for scientists and research evaluation between scientific impact and tackling societal challenges. Elephant in the Lab. https://doi.org/10.5281/zenodo.1183386

References


Bozeman, B. 2003. Public value mapping of science outcomes: theory and method. In Public Value Mapping for Scientific Research. Washington: Center for Science, Policy and Outcomes.

Franck, G. 1998. Ökonomie der Aufmerksamkeit. 9. ed, Edition Akzente. München, Wien: Hanser.

Jinha, A.E. 2010. Article 50 million: an estimate of the number of scholarly articles in existence. Learned Publishing 23(3): 258–263. http://doi.wiley.com/10.1087/20100308

Kosmützky, A. 2017. Altmetrics & Co. und die Freiheit der Forschung. Kritische Ausgabe. Zeitschrift für Germanistik & Literatur 33:7-12.

Lomas, J. 2000. Connecting research and policy. ISUMA Canadian Journal of Policy Research: 140–144.

Ravenscroft, J., Liakata, M., Clare, A. and Duma, D. 2017. Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements. PLOS ONE 12(3): e0173152. https://doi.org/10.1371/journal.pone.0173152

Smith, R. 2001. Measuring the social impact of research: difficult but necessary. BMJ: British Medical Journal 323(7312): 528.

Schneidewind, U., Singer-Brodowski, M. 2013. Transformative Wissenschaft. Klimawandel im deutschen Wissenschafts- und Hochschulsystem. Marburg: Metropolis.

Spaapen, J., Drooge, L. 2011. Introducing ‘productive interactions’ in social impact assessment. Research Evaluation 20 (3):211–218. https://doi.org/10.3152/095820211X12941371876742

WBGU. 2011. Welt im Wandel: Gesellschaftsvertrag für eine Große Transformation. Berlin: Wissenschaftlicher Beirat der Bundesregierung für Globale Umweltveränderungen.

Weißhuhn, P., Helming, K., Ferretti, J. 2018. Research impact assessment in agriculture—A review of approaches and impact areas. Research Evaluation 27 (1):36-42. https://doi.org/10.1093/reseval/rvx034

Wolf, B.M., Häring, A.-M. and Heß, J. 2015. Strategies towards Evaluation beyond Scientific Impact. Pathways not only for Agricultural Research. Organic Farming 1(1). https://doi.org/10.12924/of2015.01010003

