Bold ideas and critical thoughts on science.

Niels Mede on how the rise of populist politics affects academic work, the science communication practices of scholars engaging in public discourse, and ways to address these challenges.

So, Niels Mede, tell us: who are you? And what do you do? 

I am a post-doctoral communication researcher in the Department of Communication and Media Research at the University of Zurich, and until recently, I was a visiting research fellow at the Oxford Internet Institute. My research centers on science communication, with a focus on public attitudes toward science and communication about science on social media and beyond. A particular focus of my research is critical, skeptical and populist resentment against science, scientists, expertise and scientific methods, and how this manifests in people’s attitudes and communication behavior. One of the topics I have been exploring more recently within this focus is climate change, and I mostly do this using surveys of public opinion.

Do you think that your field is something that can be historicized? Is there a precedent for this kind of work, or is it emergent?

That’s a good question. I think there are two aspects to the answer. Firstly, criticism of science and scientists has existed for decades and even centuries, and secondly, it has taken very different shapes and forms. For example, skepticism of industrial research on nuclear energy emerged as early as the 1970s and 1980s – and it still exists, but it has taken different forms and may take place on other channels, such as social media. And this is where the second aspect of my answer comes in: criticism, skepticism and distrust toward science have changed and intensified in certain communication environments and during certain events and periods, like the COVID-19 pandemic, for example.

For example, science-related populism has become more relevant in recent times: populist politicians who voiced skeptical claims about scientific expertise have received quite a lot of support from certain parts of the public. Of course, populist resentment toward science existed previously, even before the “populist wave” of the last decades. However, this “wave” helped science-related populism gain traction, especially during the pandemic and on social media – but there had not been a conceptual framework or methods to really grasp this kind of populism, which is what I developed during my PhD.

But there’s a big difference between, for example, the types of discourse you look at, and the letters of objection that would have been written to a newspaper when Charles Darwin’s theories were first published. The intensity levels and the platforms are very different. 

Yes, you have to differentiate between different forms of skepticism or backlash. The one you mentioned may have been perceived as unproblematic or even beneficial. Organized skepticism is a very important part of scientific research. You have to be critical about the methods scientists use, or their funding sources. But what I focus on are the more drastic and often illegitimate forms of science skepticism which ignore the competencies of scientists, deny the norms of science, and reject its ability to provide society with the best available knowledge. These forms of skepticism seek to replace these competencies with common sense and gut feelings, which are arguably not compliant with scientific norms and quality criteria.

Is that how you distinguish between useful scientific skepticism and populist dissent – whether scientific evidence is used or ignored?

Yes, that’s one important distinguishing criterion. However, there’s not really a strict line between legitimate and illegitimate skepticism, because you can argue that science-related populism has at least some bits and pieces which are legitimate or justified. For example, one aspect of populism is a participatory demand by the public to have a say in decisions by elites, such as the academic elite. And having a vocal public that acts as a check and balance on science may be worthwhile and beneficial to science – so this can be seen as a democratic and perhaps healthy component of science-related populism. But if we go a step further, when it comes not just to participating in science but to rejecting it, then that crosses the line from beneficial to detrimental.

In your opinion is there a point at which beneficial skepticism tips over into conspiracy theories and anti-science dogma? Or is the line more grey?

This is a normative question; there is no black and white – it is a continuum. And it depends on whom you ask. Some liberal voices may say: “Well, controversial claims or maybe even conspiracy theories provide some kind of epistemological angle that needs consideration. So even though they are definitely false, they need to be part of a democratic discourse.” Other, less liberal voices may say: “Well, conspiracy theories are completely illegitimate, and we cannot allow them to have any importance in public discourse.”

And that depends on where one sits on the spectrum as a researcher, and how one moves through it, depending on the aspect one is looking at.

Yes, and it also depends on your personal experience. For example, I have been lucky not to have been attacked or criticized by conspiracy theory believers. Other researchers and colleagues have faced serious backlash, so they would be less inclined to say: “Well, we have to allow every critic to participate in public discourse.”

Do you think scholars have an obligation to debunk or rectify false interpretations of their own research, even when these misinterpretations are not necessarily intentional? For example, journalists may misinterpret research because they’re trying to simplify something complex for their readers. Or is it a scholar’s job to do the research, put it out there and then walk away from it? 

I would argue that scholars do have an obligation to rectify false interpretations of their own research, regardless of whether these are intentional or unintentional, because scientists have committed to providing society with reliable knowledge – which doesn’t only mean producing this knowledge. It also means engaging with feedback from politicians, journalists and the general public, which may contain misleading claims, false information, or misinterpretations. In other words, scientists have an obligation, or at least a responsibility, to monitor and scaffold what society does with their work.

But, having said that, I would add that scientists should also try to discuss and rectify false representations of other researchers’ work, if they have the competencies to do so. This is part of the monitoring and scaffolding function of researchers: making sure that publics do not discard or ignore valuable scientific evidence. It is in line with what Morgan Meyer would call the ‘knowledge broker’ – that is, an agent or mediator between science, policymaking and the public.

Let me play Devil’s Advocate here: doesn’t taking this approach create a risk that research is compromised in order to make it palatable or easily digestible?

There is often a tradeoff between comprehensiveness and accessibility, or between understandability and accuracy. That’s probably one of the key challenges of science communication – to meet both standards at once. On the one hand, we have to communicate research as accurately and comprehensively as possible, while on the other hand, we must make it accessible to publics or other stakeholders such as politicians, journalists and so on. And this means that we have to really consider and evaluate who we are talking to. While some audiences may have prior knowledge of the research topics, others may not. Also, some audiences of scientists, such as journalists, have to meet the expectations of their own audiences. So, scientists have to communicate at different levels. As a communicator you have to keep this in mind – target-group-specific science communication is important.

This is where my research and that of my team come in – we are working on what publics know and what they expect from us as scientists or science communicators. Because if you understand that, you can better navigate the tradeoff between, for example, accuracy and accessibility.

Has there been a change in the way that scientists talk about their work on topics that are urgent and important, such as climate change?

There has been a change in how scientists talk about their research in general, not only on specific topics like climate change. A couple of decades back, many scientists and science communicators talked a lot about facts, knowledge and evidence, and they put that out there. They still do that, but over the last decades science communication has become more oriented toward publics and society and has sought to engage more with audiences and understand their expectations, their values, and their resentments and criticisms. This applies to most scientific topics, including climate change. Overall, there has been a realization – and many researchers and communicators live up to it – that engaging with your audiences is beneficial.

Has your research found anything about the levels of exposure that scientists have to the public discourse as individuals and as a body? Do you think they’re more exposed than they have been to the effects of public discourse? 

I think so, and studies show this. In general, exposure has increased as science has become more embedded in society and more involved in political decision-making. During the last decades, scientific expertise has increasingly pervaded many aspects of daily life and become more and more a part of political debates. As a consequence, scientists have become more exposed in public and media discourse. Newspapers introduced science sections, and science has been discussed even more intensively among the public as participatory forms of media emerged, such as social media, where there were fewer gatekeepers keeping scientists from engaging with the public and vice versa.

Beyond this long-term change, there are also ebbs and flows of media presence and public attention around certain events or phases – such as during the COVID-19 pandemic. Scientists became important actors in societal discourse: they were more involved in policy decisions, for example in expert councils, and they also became increasingly important to journalists as sources of evidence.

And this had feedback effects on science and academia. So not only has society become more scientifically oriented, but science has also become more socially oriented. Certain academic journals, for example, now really look for what sells and what has impact. This shows that the mechanisms of journalism and media discourse have extended to internal scientific communication, such as journal publications. This can be a good thing, but it can also be challenging.

This has also impacted funders, right? There is constant pressure from funders on researchers to prove their engagement.

Exactly. Very often it’s a requirement in funding proposals to say: “We will be doing this kind of public engagement, this sort of science communication,” and so on. I don’t want to sound critical about this at all – I think it’s a very good and important development. But it is still debatable whether science communication and engagement should be obligatory in order to get research money, because this forces researchers to do science communication regardless of whether they feel comfortable with it, have a talent for it, or are trained to do it. And it may disadvantage those who do not have networks or infrastructures of professional science communicators. But again, this is not a black-and-white question; it is nuanced.

Often, when we talk about this topic, we discuss climate change or COVID-19 – situations where the tension or antagonism is directed from the public toward the scientists, who are publishing important, critical information. But what about bad science? Take, for example, the MMR vaccine scandal in the U.K., where the science was bad: these vaccines were not causing autism in children. There, the dynamic was flipped. Scientists were putting bad science out there and using the media very powerfully, and it was then up to science to debunk one of its own.

I think this touches upon a very important ethical question for science communication and academia more generally, which is: do we really want a science communication ecology in which the flashiest – but not necessarily the most reliable – argument gets the most citations, the most media attention, and the most consideration in policymaking? Or do we want a science and academic communication landscape where the best, most accurate piece of scientific evidence gets incorporated into scientific debate, policymaking and journalistic coverage, even though it may not be the loudest voice? There may be great evidence from excellent studies – but we don’t know about it, for example, because the authors of these studies are not on Twitter, do not give exciting interviews, or are not flashy media characters.
[Editor’s note: this interview took place just before Twitter became X and died.]

Or what if the authors of the best scientific paper ever do not speak English, or do not have funding to publish their research open access? If it is locked behind a paywall, their work may not easily find its way into policymaking or journalistic media.

Have the high-stakes public debates around topics like the COVID-19 pandemic or climate change taught scientists anything about communication? Or do you think that the way to deal with science communication is to employ science communicators to be the middleman? What would be the most effective way to ensure that there is useful discourse between publics and science?

Overall, I think that there should be professional intermediaries between scientists and publics, such as science communication professionals at higher education institutions, or science journalists, science bloggers, and “sciencefluencers” on social media. They can do a good job of mediating between science and publics. That being said, it is still important and worthwhile to have scientists and scholars themselves as vocal communicators in public discourse – for both high- and low-stakes debates.

But navigating high-stakes debates is often difficult, and the pandemic has taught us this once more. First, scientists must be aware that when they communicate about contentious issues, they will often be perceived as political actors. As such, they are open to criticism from parts of the public and certain media, much as politicians are. This can be challenging, which means that they need to be prepared and trained for it. Perhaps we also need to find ways to reduce their “target-worthiness” while they remain vocal – for example, by making processes that take place behind closed doors transparent, such as political advisory roles.

And even though it is difficult, I think it is important for scholars to separate their role as a scientist who offers scientific evidence and expertise from their role as a citizen who is entitled to voice political opinions and endorse certain policies. Mingling the roles of scientist and citizen can be described as stealth democracy, or what Roger Pielke would call stealth issue advocacy, and the pandemic showed that this may be perceived as bad and problematic by parts of the public.

The second thing I observed during the pandemic – and which I think could be another key learning for similar high-stakes debates about climate change or, potentially, AI – is that there has been a lot of focus on a top-down approach to science communication, much like science communication in previous decades, where scientists were knowledge producers who put out their research and let the public deal with it. This approach may cause the kind of public criticism I study in my research: scientists who speak from the top of the ivory tower down to the public may be perceived as elitist and provoke populist backlash. Moreover, this position may prevent scholars from getting a sense of public opinion dynamics and legitimate concerns. During the pandemic, I think some scholars and scientists ignored the fact that this top-down approach might cause a backlash. So the learning from this would be to focus more on engagement, dialogue and participation with publics – doing citizen science projects, doing workshops with members of the public, with opinion leaders, with different target groups and so on. But it is easy for me to call for all of this; it’s really a big task, obviously, one that requires funding and training, for example.

Absolutely. Any researcher who has tried to explain their work at the family Christmas dinner table knows that it is not easy, even if we don’t work with controversial topics.

Yes, engaging with your audience and being open to feedback is important. Many topics are difficult to communicate. They might be complex and abstract, and we sit at the Christmas table, explain them to our families and think: “Well, this was hard, but I think I did a good job.” But we don’t know how much our uncle or grandmother actually took from that, or how they understood it. They might have a very different conception from what I had in mind when I was talking about my research.

And if our research is publicly funded, which means that ultimately our uncle’s or our grandmother’s taxes are funding our research, then there is even more of a responsibility to think about how we talk about these things.

That’s actually another reason why scientists do have some sort of obligation or responsibility to engage with publics: to legitimize their role in society, because many are funded by taxpayers.

What is the one problem in research that you think doesn’t get talked about?

A very important problem which still gets too little attention is inequality in academia, and in science communication practice specifically. There is inequality at the Global South/Global North level, for example with regard to financial resources. Not speaking English as your main language is another inequality cleavage. Being a woman can also be a barrier to success – there are many inequalities. Many researchers are well aware of them, but in my view, still too little is being done to address them.

And this also applies to science communication practice. We have, for example, great handbooks on how to do good science communication. But most of them are in English or are edited by English-speaking editors. And while there are good examples of handbooks which include various geographical perspectives, many best-practice guides are tailored toward Western, educated, industrialized, rich and democratic societies.

Right! It is up to researchers, communicators and funders to address this.

Yes, and proactively so. Here we sit, as Western white people, speculating about disadvantages that I have not experienced myself firsthand. It is really important to include minorities and other local communities in our research and outreach. So don’t just do literature searches on Google to find a paper about science communication in South Africa or the Middle East – really work with people from there, involve them and empower them. I get that it is easy to say this. I am co-leading a global survey project on trust in science and science-related populism. We have collected survey data on these issues in 67 countries, and we could have done so mostly without any local help. But that would have been very bad parachute science and would have prevented us from considering valuable knowledge about local cultures. So we made the effort to reach out to collaborators from almost every single country and involve them in our project. And this was time-consuming and sometimes frustrating. So, just saying “avoid parachute science and be inclusive” is easy. But doing it requires effort and commitment – which mostly does not get acknowledged or incentivized, or is not even visible to others.

In a research environment where the drive to publish quickly and get research out there is constant, inclusive practices slow processes down. But if that’s what it takes, then maybe that’s how it has to be?

Exactly. And it needs fundamental, organizational, institutionalized change, because the metrics, benchmarks and criteria that we have as key performance indicators in academia are usually publications and citations. But there’s no metric – and maybe there shouldn’t be one – for doing things inclusively.

Although perhaps the ultimate metric is that this approach will produce better research, which is more applicable and will have a longer shelf life?

Yes. And I don’t want to sound too pessimistic – because many colleagues and scientific journals are very aware of these problems and appreciate every effort taken to really increase diversity and be inclusive. But I think a lot still needs to be done.