What is disinformation and how does the spread of disinformation affect liberal democracies worldwide?
The longer I work on this topic, the more I notice that “disinformation” is really an umbrella term: it covers many different phenomena, each of which may have its own logic, yet all are grouped under the same label. I find this relevant because I’m particularly interested in a type of disinformation that doesn’t appear in most definitions. Take, for example, Michael Hameleers’ definition, which describes disinformation as a form of manipulative communication whose underlying intention is often concealed. Behind this view lies the assumption that someone is strategically spreading falsehoods in such a way that the audience believes it is receiving accurate information.
This conventional understanding of disinformation is often tied to platforms and algorithms. The assumption is that the attention economy and its algorithms help amplify disinformation. But this definition seems to overlook the kind of disinformation we now frequently see, namely that of political elites. With populist politicians, you get the impression they simply don’t care about the difference between true and false, at least in the realm of political communication. In other areas the accuracy of information still matters, say, whether it will rain tomorrow or whether a flight is on time; in politics, however, accuracy and reliability seem no longer to matter. Disinformation is spread with the expectation that people will support these claims, regardless of whether they believe them. It becomes a tool of power: populist politicians act as though they have the authority to define reality.
This kind of disinformation is not hidden, and platforms aren’t the primary vectors. It’s typically propagated by media outlets like Fox News or other right-wing channels, which bring it to the forefront, only to be taken up and debated by other media. Platforms like X (formerly Twitter) don’t necessarily offer the reach needed for such visibility; it is the interaction between fringe media and legacy mass media that amplifies these messages to the public.
What has changed in the era of social media?
To be honest, this dynamic has always existed to some extent. What made it explode as a topic was 2016—with Brexit and the US election. After these unexpected results, many observers sought explanations and blamed the outcome of these votes on platforms, arguing they had spread too much disinformation—and that voters, like “gullible sheep”, fell for it and made the wrong choices.
That’s how the narrative unfolded. Since then, research and publications on disinformation have increased dramatically. A new research field developed that draws direct connections between disinformation and platforms. That’s one level of the discourse. Then there’s also a methodological dimension: when it comes to tracing disinformation, platforms are much easier to study than traditional media ecosystems, where drawing connections is much more complex.
If you ask me what empirical role platforms play in spreading disinformation, I’d say they serve as a prelude to populist dynamics. A huge amount of content circulates online, and existing social networks help rumors and bizarre narratives spread. So yes, platforms enable a diversity of circulation paths for information. But we can’t say disinformation wouldn’t exist or would be less visible without platforms. It would simply spread differently.
Given that platforms depend on who uses them, do you think scientists and science communicators can help counter the spread of disinformation by engaging differently on social media? Or are these platforms ultimately unsuitable for sharing knowledge?
No, I wouldn’t say they are unsuitable at all. Platforms accommodate so many different uses and opinions. This is why many people, especially communication scholars, now talk about a “fragmentation of the public sphere.” At the very least, we can observe a kind of differentiation: we now have various coexisting cultural and thematic public spheres. I believe platforms and messaging services are indispensable for science and science communication. Depending on your field, they can be the fastest and most efficient means of communicating with colleagues. For many disciplines, they are also an irreplaceable source of information, providing insights you wouldn’t get otherwise. Platforms that allow users to curate their own feeds and choose whom to follow create immense value for their members.
At the same time, we’re seeing many scientific organizations and individual researchers withdraw from X (formerly Twitter). There is, or used to be, a widespread belief that democracy functions best when we all read, hear, and discuss roughly the same information. But we also have to recognize that this kind of homogeneous public sphere existed only for a relatively short time, essentially from the postwar period to the late 1990s. Back then, dominant mass media told relatively unified stories about the world. But if you traveled to another country, you’d immediately notice that entirely different topics and regions dominated the local media coverage. For instance, while Africa might barely appear in German media, it is much more prominent in the UK or France.
This example shows how strong the agenda setting role of legacy media once was—a degree of selectivity that we no longer experience in the same way today. And that’s not necessarily bad; it simply defines a different media era. The real challenge now is: how do we create shared political understanding when everyone sees a different version of reality on their screens? That’s the key democratic question in the age of digital platforms.
What does effective communication look like in today’s hybrid media environment?
Many people are now working on new participatory formats, recognizing that the model of democracy from the past 50 or 60 years may no longer be sufficient. One such idea is citizens’ assemblies, where randomly selected participants deliberate on complex policy issues. That’s one way to create public discourse, but these formats don’t scale easily. And they might not work for every issue, because solving complex problems requires expertise, and random selection alone doesn’t always ensure that. What we do see, though, is that when something truly dramatic happens, like the pandemic, we all talk about the same thing again. In moments like that, fragmentation temporarily recedes and there’s a shared focus. But you can’t rely on crises to unify public debate. Perhaps media formats specialized in aggregating information could help us create a shared public space in the future?
And what role does science play in this context? Is it still the “force of the better argument,” or has that faltered?
That was always more of an ideal than a reality. From an academic standpoint, of course it’s attractive to see one’s own expertise as contributing to better discussions, and ideally to better political decisions. But today, that doesn’t really seem to be the case. Quite the opposite: populist policies seem to have the upper hand. There is more expertise than ever; many people are well educated and research topics in great detail. Yet when we look at political decisions or statements, they often seem decoupled from empirical evidence. You wonder: “He must know better, or she certainly does, so why say that?” And people who work on expert commissions often leave frustrated. The effort behind gathering and synthesizing expert knowledge rarely seems to translate into meaningful policy impact. It’s a recurring source of frustration.
Populist politics defines itself as opposition to elites—including knowledge elites. Think of the current health policy in the US, which openly defies established medical standards and remedies. For those aligned with populist thinking, rejecting expert knowledge may feel empowering.
That doesn’t mean researchers should withdraw. On the contrary, they must actively defend their role. Research can no longer pretend to be apolitical—scientists are increasingly drawn into debates, and they need to understand and accept the political consequences of their work.
As a final thought: how should regulation address the use and communication on digital platforms? What challenges and prospects do you see, especially when it comes to protecting freedom of expression?
That’s a very complex question. The European Digital Services Act mentions disinformation in passing, but never really defines it. In the actual legal text, the focus is only on illegal content, and it’s left up to each member state to determine what qualifies as unlawful. That’s been one of the major critiques: member states now have broad power to define content-based offenses, which can be problematic for freedom of speech.
Platforms themselves don’t assess disinformation based on meaning. Instead, they use pattern-recognition tools, calling it “coordinated inauthentic behavior.” They ask: Who sends which links to whom? How often? Which words pop up? This is because it’s very difficult—perhaps impossible—to systematically define and detect disinformation via semantic criteria. What’s also often missing from fact-checking debates is the recognition that disinformation isn’t always factually false. It’s often presented as narratives—blending elements of truth with highly misleading framings or emotionally charged undertones that stir resentment. Fact-checking can only address a small portion of that.
That’s why it’s so hard to pin it down. I like to refer to the American political theorist Lisa Disch, who argues that disinformation becomes dangerous only when there’s no longer a functioning public sphere capable of questioning or correcting it. Her example is the Iraq War—where decisions were based on intelligence reports no one could verify. According to Disch, disinformation is especially dangerous when there’s no opportunity for public deliberation about the quality of a statement. And I agree: That’s when it becomes a threat to democracy.
My suggestion is: Let’s clearly define and remove illegal content without weakening freedom of expression; that part is straightforward. But beyond that, combating disinformation is more about enabling and safeguarding pluralistic public discourse. A central question here is how to protect quality journalism, especially since platforms dominate the advertising market and younger generations no longer subscribe to newspapers. We’re likely facing the last generation that funds journalism via subscriptions.
So, we need other funding models. In that context, I see the fight against disinformation not only as a matter of takedowns, but as a larger task of rethinking the future of quality reporting, political education, and critical media literacy. Studies strongly suggest that good journalism promotes a better quality of public discourse. This also includes science communication.