
This short analysis shows how the quality of scientific policy advice, as an important part of the recognition of scientific activity, can be assessed and how these processes and results can be made usable again for science.

In academia, there is a clear understanding of how the quality of research is assessed: academic peers evaluate it in a peer review process. Only through this discourse of expert opinion can it be determined whether the quality of a paper is good or poor. The peers themselves also decide when science is excellent, without using formal criteria or even indicators. Science thus has a monopoly on quality; it alone determines what counts as high or low quality. This procedure, tried and tested internationally for centuries, is elegant and reduces transaction costs. However, peer review has a disadvantage as well: it works exclusively within academia and draws solely on the expertise of academics. When the boundaries of the academic system are crossed, academic peers lose their authority, because other indicators then take effect and the space of discourse opens up.

This is problematic because science is increasingly confronted with the demand to produce more useful and usable results for society. Institutional evaluations of scientific institutions, for example, increasingly ask how the respective institutions make their results available to society and how they contribute to solving societal challenges. What is needed is more societal impact! An important instrument for producing such impact is scientific policy advice. In this case, however, the internal quality assurance instruments of science are no longer sufficient, because this kind of impact cannot be assessed by academic peers alone and therefore cannot be accommodated in the scientific reputation system alone. The question thus arises as to how the quality of scientific policy advice, as an important part of the recognition of scientific activity, can be assessed, how these processes and results can be made usable again for science, and what procedure could be used for this.

Assessing quality in collaborative arrangements

The situation is complicated by the fact that scientific policy advice is becoming more and more differentiated. In addition to linear models, forms of co-production of knowledge between science and politics have gained importance in recent years. The understanding of roles and impacts as well as the consulting formats are becoming more differentiated and, of course, have a central impact on the understanding of quality and the forms of quality assurance. This already identifies a crucial problem: Which of the entities involved – science, society, politics – is set as the central reference? Can this really be answered within science and with the rules applicable there? For politics, it will become apparent that the science involved can only be won over to cooperate on foreign terrain if the results of the interaction can become part of a new scientific culture of evaluation and be firmly established in the canon of scientific work.

Bridging different kinds of knowledge

The pandemic acted as a catalyst for the long-running debate about the relationship between science and politics, and between science and the public. It is precisely in a crisis that the different basic understandings of one's own work are put to the test, and positions appear entrenched. On the one hand, there is a call for a “clean” separation between the different spheres and thus different reference systems; on the other hand, good reasons are given for seeking a more direct form of cooperation.

The debate about scientific policy advice gains its particular poignancy in the context of a broader discussion about the relationship between science and society, as previously discussed. Ever since academic institutions have existed, there has been a struggle over this issue. The previous “contract” between society and science (Merton 1985) was based on the assumption that independent forms of knowledge production and the exclusivity of peer quality assurance would yield high-quality results, which would sooner or later break through and create the desired usefulness for societal challenges. In recent decades, a debate has started that no longer shares the old formula ‘relevance through excellence’, and demands for more impact of scientific knowledge have increased. What is desired is convincing evidence of the effectiveness of science. In the last 50 years, it has become clear that there is no automatism in academia by which knowledge legitimized by peers spreads directly to other sectors and proves its worth. In order to provide this proof, special transitions are necessary due to the constitutive self-reference embedded in the peer review system (Knie/Simon 2019).

This is particularly evident in the case of an important transfer channel for scientific knowledge, namely policy advice. For it is precisely in times of crisis that public interest in science increases, and the public conducts a performance review, so to speak, of scientific results and interpretations. There is a consensus that the quality of scientific policy advice requires further criteria in addition to internal validation, such as usefulness for the respective field of application (Weingart/Lentsch 2008). Further requirements can be summarized in the desire for “epistemic robustness, i.e., the resilience and reliability of expertise, even under unknown and varying conditions of application and practice. Epistemic robustness must go hand in hand with political robustness. Only in this way can connectivity and responsiveness to criteria of political legitimacy be established” (Lentsch 2016: 321).

In research on science, there have been repeated attempts to develop typologies of both scientific policy advice and expertise (Pielke 2007), including reflections on the role of expert laypersons, which can strengthen society's trust in science (Collins 2004).

No one disputes any longer that the complexity of the societal problems to which politics must respond has increased in recent decades, and that fewer and fewer policy fields can be considered in isolation; one only has to think of current energy policy and the war in Ukraine.

Scientific policy advice is based on the idea that scientifically sound knowledge can effectively support political action. In most cases, the starting point is a knowledge asymmetry between scientific expertise and political decision-making. Due to differently distributed resources (including time, specific skills, and networks in the respective functionally specific communities), the scientific expert ideally has more operational knowledge than the political decision-maker. In the classical model of the social contract, he or she is also given freedoms and endowed with resources for the provision of knowledge.

One ideal conception of cooperation is that of the “honest broker” of scientific policy advice (Pielke 2007). The model stylizes four types of scientists: 1. The pure scientist may produce results that are highly recognized scientifically, but these are considered structurally useless for political action. 2. The science arbiter answers concrete questions posed by political decision-makers, but does not concern himself with their overarching political and social context, let alone with alternative options for action; this corresponds to a more technocratic model of deliberation. Accordingly, he can easily serve particular interests. 3. The issue advocate is well aware of such interests and serves them for reasons of resource economics. 4. The honest broker, finally, embodies the type of scientific policy advisor who is aware that value judgments find their way into policy advice, including in the selection of scenarios of political decision options. Precisely by presenting alternative options for action, including an ex ante evaluation of their possible consequences, the honest broker follows the idea of not limiting political options for action through selection, but rather expanding them.

For all the persuasiveness of the honest broker, the scientist with integrity who puts all options on the table and discusses their possible consequences, the model remains simplistic. Neither the forms of policy advice nor an organizational perspective are addressed. Moreover, a residue of linearity is inherent in the model, that is, the notion that science delivers and policy receives scientific expertise. Against the notion of the honest broker, the “advocatory” model is often brought into play as a counter-model, i.e. a form of scientific policy advice in which science consciously allows itself to be “appropriated” for a specific social or political goal and provides the necessary expertise for this purpose.

In recent years, increased attention has been paid not only to the roles that science plays in policy advice, but also to the processes and formats of advice itself. Alongside the linear model, in which science provides expertise in various forms (interviews, studies, etc.), other approaches doubt the validity of such a division of roles and sequence of actions and turn instead to models in which politically usable knowledge is produced interactively and cooperatively. In this context, the question arises where cooperation should begin: for example, from the perspective of a policy-advising scientific institution, whether political actors should or can already play a role when research priorities are set, in order to actually meet the needs of politics. How are the actors selected, and according to which criteria? How can the dominance of particular interests be counteracted? These questions also concern the “production process” itself, which depends on many preconditions and requires a great deal of coordination. However, the formats alone do not determine the understanding of roles in individual or institutional policy advice. In the next section, we outline a form of co-production between science and politics that is more strongly characterized by the notion and goal of advocatory policy advice.

Cases for co-creating policy

Mode 2 as a co-production of knowledge was “discovered” more than 20 years ago (Nowotny 2000; Nowotny et al. 2001), leading to the concept of “socially robust knowledge”. Nowotny et al. (2001) explain:

“The reliability of scientific knowledge needs to be complemented and strengthened by becoming also socially robust. Hence, context-sensitivity must be heightened and its awareness must be spread. The necessary changes pertain to the ways in which problems are perceived, defined, and prioritized, which has implications for the ways in which scientific activities are organized”.

(Nowotny et al. 2001: 117)

Science, politics and other societal actors work together on major societal issues in an inter- and transdisciplinary manner. For academic science, this also has the advantage of a “double” validation, so to speak: the peer reviewers are joined by other societal actors who ultimately decide whether the knowledge is applicable or not.

This “ecosystem of expertise” (Doubleday/Wilsdon 2015) has developed rapidly in recent years. So-called “experimental spaces” and “real laboratories” have emerged, in which knowledge is generated jointly in different actor constellations and technical and social solution models are developed and tested (Simon/Knie 2021). They are not established exclusively as new settings for policy advice, but nevertheless make up a significant part of it. Especially in the energy and mobility sectors, whose transformation is a key challenge for societies seeking to halt climate change, such forms of collaboration have been used for policy advice in the last decade. For years, institutions such as the “Agora Energiewende” or the “Agora Verkehrswende” have been of strategic importance for this transformation process, because they represent places of protected exchange in which different intervention strategies can be prepared and their consequences then reviewed and revised jointly and with respect to different reference areas. The prerequisites for successful collaboration are clear rules, clarity of roles, a high level of self-reflection on the part of the actors involved, and, of course, a shared sense of trust and a departure from the idea of hierarchical policy advice. Science plays an important role in these forums as a supplier of tested knowledge with a transparent production process, but not a dominant role. Initiating, accompanying and assessing processes of societal transformation creates many imponderables that elude empirical research. Findings from the past are of little use as explanations, because circumstances and contexts have changed in the meantime, yet scientific research needs precisely this ceteris paribus structure to formulate results. Therefore, within its framework of assumptions and hypotheses, which no empirical research can do without, science always drags the past into the future. Progress can therefore be made when non-scientific actors make decisions and create new circumstances, opening up, as it were, a new space for gaining knowledge. Policy advice based solely on scientific work would therefore always be subject to path dependency and could not contribute to the desired solutions (Knie 1989).

Experimental formats with a plural composition of actors are increasingly initiated by politics and embedded in comprehensive funding programs. In 2014, for example, Baden-Württemberg set up the “Reallabore” funding line, the German Federal Ministry of Economics founded the “Experimental Spaces” project group, and the German Federal Ministry of Labor and Social Affairs established “Working 4.0” as a platform for corporate learning and experimental spaces for work innovations. The ministries expect results that are not only scientifically valid but also practically useful. The constellation of actors is interesting here: politics acts as a client and at the same time as an equal partner in a process that must adhere to jointly agreed rules of the game. Initial experiences with these new forms were reflected upon at the WZB-Mercator Forum Science and Politics (WZB-Mercator Forum Wissenschaft und Politik). It was shown that, among other things, new approaches to solutions and regulation could be tested and recommendations for legislation could be developed on the basis of practically tested knowledge. The binding nature of the knowledge developed was increased, among other things, through the direct participation and involvement of stakeholders from the political arena. Disputes take place in a protected space rather than immediately in public debates, which means that escalations, rhetorical excess and fundamental positional battles can be avoided (for the time being). However, the joint development of knowledge places high demands on process design, since actors from different social fields must be moderated as equals. It is crucial to avoid scientific dominance through a hierarchical understanding of transfer. A conclusive assessment of this new participatory approach to policy advice has yet to be made, however, especially since its effectiveness in the further political process remains to be proven. Internationally, these formats have nevertheless become an integral part of the cartography of policy advice (Doubleday/Wilsdon 2015).

Assessing quality in collaborative settings

But how can work be evaluated and quality assured in these new settings of scientific policy advice? First of all, it should be noted that there is widespread consensus in the discourse that the effects of scientific policy advice are difficult to prove and that what is quantifiable does not represent a valid factor. From an organizational perspective, of course, all sorts of things can be counted: how often studies are downloaded or read, how often one is invited to political committees, what kind of resonance arises, media appearances, and much more. It would certainly be more meaningful if findings from the consultation found their way into speeches or even bills of the German Bundestag. But what does all this say about the effect that the knowledge provided can have in the political process? On the contrary, policy-advising institutions could then assume that high publication figures and activities in social networks point to the success of their advisory work – often a fatal misjudgement, namely the belief that such indicators can actually be used to measure “impact” in the policy field. Above all, consulting formats such as discussions with members of parliament, with employees from ministries, or even with ministers themselves, which take place in a non-public space and achieve an “impact” in politics precisely by creating an “intimate” discussion situation, cannot be “counted”. Actors from the policy field as well as policy-advising think tanks repeatedly point out that it is precisely this kind of face-to-face conversation that is highly valued.

Another problem is that the impact of scientific policy advice as a whole is difficult to assess because the appropriate criteria and indicators are lacking, and there are currently few attempts to develop new ones. Instead, models such as impact pathways are being developed and tried out (Wilsdon et al. 2015), which focus more on the process (in the case of policy advice) and are based on good practice. Impact pathway models, in turn, are designed to reconstruct the course of actions that intervene in a process. Their application – and this seems to be important – takes place during individual phases of a process to be defined, e.g. in the planning, implementation and evaluation of research and development projects or entire programs, i.e. after concrete and intentional actions. Whether and which effects have been achieved in the respective sectors is to be captured by so-called “interventions” of a mostly qualitative nature, in the form of retreats, workshops, etc., so that a continuous follow-up of the process is possible. This is particularly important in consulting situations, since the policy agenda can change quickly.

Since a linear relationship between knowledge production and policy-making is less and less in line with reality, quality assurance in this case means above all examining the interactive processes to see to what extent they actually contribute to ensuring and improving the quality of deliberation. At stake are different types of knowledge, which require specific consulting formats with different functional references (information function, observation function, translation function, etc.) and which each produce a different impact.

Conclusion

Returning to the initial question in light of this complex and, above all, interactive form of scientific deliberation: how can the quality of the collective action be checked, and above all, how can this be done in a form legible to the scientific world? The idea is that the processes of peer review, which have proven successful for inner-scientific discourse, serve as a model for an extension in which representatives of the extra-scientific reference system are integrated into the process. It is important here that these peers from the non-scientific field have a high reputation and are also representative of the respective field, thus contributing to the legitimacy of the procedure. Analogous to a scientific review board, peers are also identified from the advice's fields of application, who can jointly assess the results achieved in dialogue without having to resort to a set of criteria or indicators. In this “extended peer review” the same rules apply as those already described for the experimental spaces, namely an understanding of working “among equals” and the renunciation of a procedure dominated by science. Once established, an extended review could not only test the quality of scientific policy advice; it would also be interesting as a new procedural rule for science, in that it could realign established peer review by integrating knowledge from outside the discipline and new references, thus counteracting a generally recognized path dependency of this procedure. In the end, this would provide not only a procedure for evaluating scientific policy advice, but also an instrument for increasing the plurality of scientific activity in the search for more impact.