Peter Weingart on changing perceptions of science's role in society, safeguarding autonomy, and the concept of dual legitimacy for scientific knowledge in policy decisions.

Academic research enjoys a high level of trust in German society, not least because of the autonomy granted to it by the constitution. At the same time, the public expects research to leave its “ivory tower” and take a more active role in addressing complex societal challenges such as the Covid-19 pandemic or climate change. Engaging in public debates and political decision-making confronts researchers with a dilemma: how far can they go, and at what point is their scientific autonomy threatened? Elephant in the Lab talked to Prof. Dr. Peter Weingart, a prominent voice in debates around scientific policy advice, about the transformation of the scientific expert’s role and the current risks for researchers who communicate with publics outside academia.

In Germany, you have actively shaped the discourse on scientific policy advice since the 1980s. Back then, the politicization of science was a prominent topic, as was the perception that science could lose its autonomy and legitimacy if it became involved in societal and political affairs. What has changed in public perception since then?

That is a huge question, of course. On the one hand, it can certainly be said that science has moved closer to society, and there are a number of analyses on this. In this context I’m thinking, for example, of Helga Nowotny’s concept of “Mode 2” knowledge production, which describes a collaborative, interdisciplinary and application-oriented approach involving actors not only from research but also from industry and society. The same trend can be observed in science policy, where “public engagement with science” or “citizen science” is promoted. But you have to look more closely to see what is actually happening beyond the rhetoric. Applied research is not new; it has always existed. If the question is rather about the extent to which science intervenes in societal and political affairs, you can compare two cases that are currently very visible: climate change and the COVID-19 pandemic. Let’s start with the first example. Science has gained considerable definitional power over climate change in the last few decades. There are a number of activist groups who say “follow the science” and thus start from the expectation that scientific evidence will shape policy decisions and cannot be ignored. But the reality is that you can very well disregard the evidence, say that climate targets will not be met, and point to adaptation as the alternative.

What happens when science itself becomes activist, and what does that mean for the credibility of researchers and of science in general? I’ll take the example of Stefan Rahmstorf, a climate scientist who is also an activist. Of course, it depends very much on the individual case, but one of the biggest dangers, in my opinion, is that as a researcher you are drawn into the political process and your arguments are no longer perceived as scientific evidence but as a political opinion. Then it is one political opinion against another.

Academic research enjoys a high level of trust in our society, but this trust is strongly tied to its autonomy. In Germany, scientific autonomy is protected by the constitution just like freedom of the press; this is not the norm, even among other democracies. But criticism is also voiced when there is too much independence, captured in the negative keyword “ivory tower”: science that is isolated from society. These voices argue that science should in some way justify itself to society, not least because it is paid for by it. The crux is for academic research to remain an independent voice, free from special interests, guided by its own methods and incorruptible, while also serving the common good of society.

Is it possible to prevent science from losing its autonomy and thus social trust when it tries to have a societal impact?

The more science involves itself in political decisions in an activist sense, the more it loses autonomy and legitimacy. This can be seen clearly in the second example I mentioned: science communication during the COVID-19 pandemic. Here, the involvement of scientific experts in political decisions was almost immediate and very direct, and this was part of the problem. In addition, the pandemic directly affected many people, and they all had their own opinions on these developments. Moreover, during the first months there was significant uncertainty, even within the scientific community itself. Researchers like Caspar Hirschi and Alexander Bogner criticize that politics relied on scientific expertise even though the scientific community did not yet have all the answers. But no one can escape this trap. Some countries tried to ignore the available evidence, with devastating consequences: the USA, where then president Donald Trump sidelined his own advisor; Sweden, which adopted the opposite model of no lockdown; and Italy at the beginning of the pandemic. The price can be counted in the number of deaths. Of course, there were deviations from the prevailing opinion and controversial discussions within the scientific community, which we could also see in our own country. This is the classic situation that occurs when there is no certainty, but after a few months we already knew much more. Furthermore, people from other disciplines were brought in to gain a broader perspective. Now, after almost three years, we are much wiser and know, for example, that school closures were not necessary. These examples show that especially when a crisis presents an immediate threat, as the pandemic did compared to climate change, science risks losing its autonomy and credibility even more. It gets dragged into political decision-making, and that is not necessarily a good thing.

In your opinion, has the role of experts changed over the last decades?

First of all, it should be noted that the current form of systematic government consultation by science has only existed since the end of World War II. Subsequently, the advisory role has gained significantly in importance, and the number of advisory bodies has increased dramatically in most Western countries. In the 1970s, during the nuclear debate, it became apparent that scientific advisors were being politicized. Governments as well as opposition parties and NGOs acquired their own experts, as this knowledge provides one of the highest forms of legitimacy. Since then, public debates have taken place in the configuration of “experts versus counter-experts”.

On the one hand, this is good news, because it makes scientific discussions accessible to the public. This represents a significant change in the role of experts, since in the past advisory processes were often intentionally kept secret. With increasing public visibility, the role of the expert also becomes more political. When experts criticize other experts and vice versa, the persuasive power of each individual argument is weakened, making the situation quite complex, unless there is such a high degree of consensus within the scientific community that disagreement is hardly possible any longer. Typically, scientific discussions start with a high level of dissent; then further research is conducted, and eventually most researchers come to an agreement as the research approaches its endpoint.

A very illustrative example in this context is the debate about the carcinogenic effects of cigarette smoke. In this case, the tobacco industry resisted regulation for a long time, criticizing the analyses coming from the scientific community, and did so with great success, as it was able to uncover errors. However, this led independent research to improve its analyses, and in the end the tobacco industry had no choice but to admit that cigarette smoke is carcinogenic. We have witnessed the same pattern in the discussions around climate change. Initially, it was a highly uncertain hypothesis, but now there are very few people who doubt the phenomenon itself. This shows that counter-expertise has, so to speak, played a positive role.

During the Covid-19 pandemic, we observed that experts make themselves vulnerable when they go public. What do you think are the risks and dangers for individual researchers?

There is a whole range of phenomena in which scientists who have spoken out publicly have become targets of hate speech or attacks. This is the risk one takes when participating in public discussions on controversial topics. Of course, it depends a lot, I believe, on the specific subjects being discussed. Some issues are relatively harmless, as they may not deeply affect citizens, and therefore people might not become as agitated. During the pandemic, however, the situation was different, and vaccine opponents also became vocal, attacking not only experts but also the politicians who made the decisions.

How can scientists be protected from such attacks on an institutional and individual level?

Scientists who step prominently into the public arena must be prepared to become the subject of political controversy. This can extend to hate speech and similar forms of hostility. I have not yet reflected much on protective measures in detail, but my initial response would be that such scientists are not easily protected, because this kind of communication is governed by the principles of freedom of opinion, freedom of speech and academic freedom, which sometimes entail robust and very critical debate. What must work, however, are the legal protections of scientists against threats and attacks.

How realistic is the concept of “dual legitimacy” of scientific knowledge used for societal purposes? According to this concept, expertise that influences political decisions should be evaluated by a diverse group of researchers while also gaining the approval of the majority of the population.

This is a theoretical thesis, so the question is not about its practical feasibility. Democracies are legitimized through elections, and adding legitimacy through knowledge naturally complicates the decision-making process. As mentioned earlier, a government can avoid recognizing certain scientific findings for a long time, but eventually the population may become dissatisfied when someone like Donald Trump constantly denies climate change while wildfires ravage California. At that point, people realize that something is not right. The question is when this tipping point is reached and the scientific evidence becomes clear enough to gain public support. Most cases in science communication are not as clear-cut as this example. Often there are controversial opinions within the community and different levels of probability attached to an outcome, which allows policymakers to avoid committing to the scientific evidence, to claim that many other factors are at play and that researchers therefore do not have to be trusted. As you can see, in reality it is a highly complex matter. Nevertheless, we can say that in democratic countries enlightenment has prevailed, and as a result this relationship of legitimacy between established knowledge and political decision-making holds more often than not.

What conditions still need to be created in Germany to enable optimal scientific policy advice? And are there international best practices?

Years ago, at the BBAW (Berlin-Brandenburg Academy of Sciences and Humanities), we tried to compile best-practice examples of scientific policy advice and also consulted international experts on how certain advisory bodies function in their countries. According to their input, there are certain principles and frameworks for setting up advisory bodies; these vary from country to country and illustrate how such processes can work effectively. In Germany, it is legally stipulated that the Council of Economic Experts (Sachverständigenrat zur Begutachtung der gesamtwirtschaftlichen Entwicklung) submits its analyses to the government, which is obliged to respond to them. This means that the advice conveyed by such a body cannot simply be disregarded. To depoliticize the advisory role somewhat, there is another regulation: the council is not allowed to give specific recommendations to the government; it is expected to describe the economic situation and substantiate this with data. Interestingly, it often circumvents this requirement by defining scenarios, saying, “if we were to increase interest rates by two percent, then the following would happen.” In this way, indirect recommendations are still given. Advisory bodies are also structured according to other logics. Take, for example, the Enquête Commissions (Study Commissions), in which the parties appoint experts in proportion to their share of seats in the German parliament. Here, the advisory process occurs directly through discussions between experts and politicians. Interestingly, it has been observed that when the consultations take place far from election time, they serve as a useful instrument for educating parliamentarians about the current state of expertise on a particular issue and making them wiser from the outset. As elections draw closer, however, the process becomes politicized again, positions harden, and the instrument becomes less effective.

This delicate balance implied by the dual legitimacy mentioned above can shift in one direction or the other over a given period. There is no one-size-fits-all solution for keeping it on an even keel. And that is the most important conclusion: fundamentally, there is no universal model of good and efficient advice. But whichever model is chosen, it must ensure the maximum independence of both scientists and politicians. In other words, politics should also have the right to be independent of scientific expertise. To achieve this, effective science communication is necessary.