
A summary of the results of a workshop held by our authors on issues related to the measurability of the impact of the Social Sciences and Humanities.


What does social impact mean for the Social Sciences and the Humanities (SSH)? How can it be measured and evaluated? This was the topic of two workshop discussions between science studies scholars and representatives of SSH disciplines. This article summarizes the workshop results.

Benedikt Fecher

Science is increasingly expected to address and help solve societal challenges. This recently became evident in the crisis around the Covid-19 pandemic: virologists and epidemiologists in particular were expected to use their expertise to help overcome the pandemic. But even before the pandemic, the voice of science played an important role in addressing complex societal challenges, for example on topics such as climate, migration or digitization. The expectation to address these issues is accompanied by a discourse around measuring, evaluating, and promoting the societal impact of science. Science is thus no longer assessed only by internal measures, but also by its benefits for business, culture, public administration, health and the environment.

Nataliia Sokolovska

This development is particularly delicate for the social sciences and the humanities (SSH), because for a long time the societal relevance of research was assessed using economic indicators. For a sociologist, a historian or a political scientist, however, the number of patents or spin-offs is largely irrelevant. Moreover, social science scholars are part of the subject matter they research. This raises particular questions of boundary setting and feedback for this set of disciplines. For us, this provided the occasion to open discussions with the workshop participants on what the demand for societal impact means for SSH scholars and where challenges and perspectives for measurement and quality assurance arise.

Challenges for measurement

To begin with, the participants in our discussions were largely in agreement that SSH expertise is necessary to address complex societal challenges, such as assessing the social consequences of the Covid-19 pandemic. Nevertheless, evaluations of societal impact were also seen as a possible interference with scientific freedom and an additional burden for researchers, especially since they are not always trained in the skills needed to communicate with non-scientific actors. It was therefore considered particularly important that any potential evaluation do justice to the diversity of the SSH and their publics.

Beyond this general point, a number of specific challenges for assessing the societal impact of the SSH were discussed. In essence, these were:

  • Reactivity: One feature that SSH disciplines have in common is that they deal with reactivity: their objects of study speak back and react to research results. For these disciplines, questions of boundary dissolution and demarcation are therefore of particular importance, especially since a stronger evaluation of societal impact might also feed back on the inner workings of the disciplines, as the following quote from a representative of the humanities makes clear:

“What kind of repercussions does this have for science? Because they are not insignificant. And the repercussions on the inner workings of scientific communication have to be taken into account in this discussion. I am not of the opinion that more [impact] is always better. […] Sciences that are very much in [public] focus suffer in their internal consistency and then often have to rebuild themselves afterwards […]. So I think it should be discussed in a different way […]: what does which development mean for society as well as for the sciences?” 

  • Temporality: The dynamics of public attention are fast-paced and societal stakeholders expect immediate benefits from research. Usually, this does not correspond with the working modes and relevance logics of the social sciences. Relevance often unfolds with a time lag, making it difficult to measure and evaluate. This becomes clear in a science studies scholar’s reflection on the role of social science expertise in the Covid-19 pandemic:

“What may be true is that [the social sciences] have not taken place on that first stage, let’s say in the debates and other talk shows, where maybe medicine has been more present in the last 12 months, which in my mind is related to the fact that the issues that have been discussed there are more short-term and social sciences will be dealing with this crisis for a very long time and also don’t always have the short-term answers ready.”

  • Visibility: The societal impact of the social sciences does not necessarily unfold in the mediated public sphere and is often more conceptual in nature. Along these lines, scholars referred to consulting activities, education programs, and broader contributions to debates. Basing the measurement of social relevance on media presence was thus particularly called into question by some discussants, as it captures attention instead of relevance (see also Matthias Kohring’s article).

“For I find it falls short to talk about impact of the humanities and social sciences in terms of mere media presence […]. The perception of the general population is an important point, but it cannot be the central one for negotiating the question of impact of the humanities.”

  • Complexity: Social science disciplines are continuously developing their language to describe and operationalize the complexity of social contexts. When entering into dialogue with different social actors, a reduction of these complexities in language is necessary, which makes expertise susceptible to everyday hypotheses or accusations of triviality. Societal relevance claims can thus come into conflict with scientific relevance claims.
  • Ambiguity: Knowledge from social science research is rarely unambiguous and robust in the positivist sense. The variety of conceptual and methodological approaches and different paradigms also means that one situation can be assessed in multiple ways. As a result, expertise from the social sciences does not necessarily produce unambiguous evidence, but opens up a space for discourse.

All of these challenges are still hardly reflected in science policy. The policy paper on science communication published by the Federal Ministry of Education and Research (BMBF) in 2019, for example, calls for a “cultural change toward communicating science”. The BMBF calls for science communication that fosters dialogue and is generally accessible. To this end, the Ministry would also like to expand the impact measurement of science communication formats. However, the policy paper barely addresses feedback effects on science. Relevance is reduced to the measurable impact of formats, and broad public reach is advocated as a proxy for relevance. The paper thus unnecessarily narrows the societal impact of research and fails to do justice to the social sciences. Günther M. Ziegler, winner of the Communicator Award, has formulated some astute and critical thoughts on the paper. Similarly, the #FactoryWisskomm, a think tank for science communication launched by the BMBF, discussed these truncated concepts critically and constructively.

Perspectives on quality and measurement

In the workshop discussions, some consequences for evaluating societal impacts were drawn from the aforementioned challenges: There was broad consensus among the discussants that it is difficult to clearly define the value of impact in the social sciences, as it is sometimes ambiguous, diffuse, and counterintuitive. Regarding the latter, it could be said that social science research does not necessarily lead to solving a problem, but can question the problem itself, as the following quote from a science researcher illustrates:

“Often social scientists do not want to appear disinterested in public or show empathy in the broadest sense. On the other hand, they deliberately set out to provoke or stir up controversy, to stir up conflict. That is perhaps also part of the quality criterion here; in other words, the question that then arises is how we evaluate interaction when provocation is used, when irritation is used, when criticism is used.”

There was also consensus that the quality of impact can only be understood in a context-specific way, in the sense that it is dependent on the person, the problem and the time. This observation weighs particularly heavily in dialogic and participatory arrangements. Finally, the diversity of actors potentially increases the diversity of quality expectations. For these reasons, a rigid, purely quantitative measurement of societal impact, especially at the level of the individual researcher, should generally be avoided. 

Some science studies scholars noted that indicators can provide orientation, set incentives, and create comparability, but that they should be used primarily to inform. They should not be formulated in general terms, but rather in a field-specific, situational, and context-dependent manner. Some spoke of a “growing” set of indicators or a portfolio of indicators that maps different target dimensions and allows individuals to set priorities and also to adjust them. This goes hand in hand with a selective openness of evaluation procedures, i.e., forms of evaluation that are goal-oriented and adaptable. Evaluation should primarily promote conditions that allow for societal impact.

Along these lines, formative forms of evaluation, i.e., forms that focus on learning in the process, were particularly advocated. The SIAMPI process developed by Jack Spaapen, Leonie van Drooge and colleagues was mentioned as a promising example of such a processual evaluation, which follows the principle of productive interactions. A productive interaction refers to an exchange between scientific and societal stakeholders in which socially useful and scientifically robust knowledge emerges. The SIAMPI method focuses on the process of this interaction rather than retrospectively assessing the outcome. However, even SIAMPI can only partially capture the counterintuitive contributions outlined above, precisely those that do not aim to solve a problem, but rather to question it.

Alternative measures for impact promotion

Due to the difficulties of measuring the societal impact of the social sciences and humanities, alternative support measures were also discussed, some of which are presented here:

  • Decentralized support structures: Communication and knowledge transfer at scientific institutions in Germany is usually organized centrally. Some discussants argued in favor of building up decentralized communication competencies, i.e., located at the level of projects and faculties, in order to do justice to the complexities of research and the public.
  • Career profiles and target agreements: Researchers are increasingly required to be able to fulfill all three missions (research, teaching, communication). In order to counter this functional overload, career paths with a functional focus were discussed (such as professorships with a focus on transfer, and impact managers based on the UK model). In addition, some discussants advocated a “free semester for transfer”.
  • Extrapolation of best practices: Some disciplines already have implicit quality assurance procedures for external communication that are not yet known outside the discipline (such as observation patterns used in the social sciences). Some discussants advocated analyzing and learning from these practices. Similarly, it would be possible to learn from concrete cases in which social impact has been successful and to derive perspectives for action.
  • Training and education: Many discussants were in favor of anchoring science communication, which they see as a prerequisite for societal impact, more firmly in scientific education and training. However, such measures should go beyond common media training and be science-based and reflexive.
  • Disciplinary self-understanding: Some discussants advocated making societal impact the subject of disciplinary reflections, especially in professional societies. The goal of these reflections should be to become aware of the possibilities and limits and to articulate them.

Since the impact of the social sciences is difficult to evaluate as such, the measures outlined here focus on creating the right conditions for social science and humanities research to be impactful. Because impact should not be understood only as a set of monocausal effects, it makes sense to think about it systemically and to foster appropriate conditions for societal impact.

Focus on conditions for impact

The impulses from the workshop discussions can add some nuances to the debate on evaluation: it is difficult to decouple the measurement of effects from the research process itself, and in addition to the effects themselves, the conditions for impact should become a stronger focus of evaluations.

However, the discussions cannot possibly do justice to the diversity of social science disciplines and their concrete contributions to society. That these subjects are important, for example in the context of issues of social cohesion, impact assessment, or the preservation of cultural heritage, cannot be denied. Sometimes, however, it is difficult to grasp this importance in a context-independent and unambiguous way. In this respect, we hope that the impulses from the two workshop discussions will sensitize people to the special requirements of measurement and assessment. Formative procedures could be one way of ensuring the quality of impact, alongside other instruments aimed at creating conditions for good societal impact. Here, quality must not be understood solely in terms of impact; it must also include the perspective of research (i.e., what impact is scientifically legitimate?). Last but not least, critical observers of science communication urge that the goal should not be more, but better science communication. Evaluation can contribute to better communication if it strives to improve the conditions for science communication, anticipates disciplinary specifics and feedback effects, and does not look only at the impact of formats and measures.