In recent years, academic research has become a target of political hostility. Governments and partisan movements have delegitimized, defunded, or discredited entire scholarly fields, while those working on topics that intersect with politics, media, or technology have faced escalating scrutiny. This has been observed particularly in environments where research findings challenge powerful interests or dominant narratives.
In the United States, this tension has become especially acute: disinformation and social media research now sit at the center of a broader culture war over the meaning of “free speech,” the role of universities and the boundaries of government-funded inquiry. Under the second Trump administration, researchers in this realm have faced sweeping interference in their work and unprecedented censorship from the federal government. This has ranged from the abrupt cancellation of hundreds of National Science Foundation (NSF) grants for mis- and disinformation projects to dramatic cuts to federal university funding amounting to more than $10 billion, including hundreds of millions of dollars from Harvard and Columbia. Entire datasets, web content, and research portals have been purged under executive directives aimed at silencing topics around “DEI” (Diversity, Equity and Inclusion), gender and disinformation, among other subjects.
Meanwhile, prominent academic and cultural institutions, including the Smithsonian—the world’s largest government-sponsored museum, education, and research complex—have faced ideologically driven interference and efforts to suppress critical historical narratives. Research centers studying disinformation and social media at numerous prestigious academic institutions, from Stanford to the University of Texas, have shut down, in many cases due to sustained legal and political pressure around their work. Foreign institutions have not been immune, with hundreds of academics receiving targeted surveys from the U.S. government inquiring about the topics of their research. These actions signal an unprecedented assault on research autonomy, creating an environment in which scholars are increasingly stifled.[1]
Measuring the chilling effect
Because the most visible forms of scholarly communication now occur on social media, the effects of political pressure are likely to manifest first and most clearly in these digital spaces.[2] Social media has long been used by researchers of all disciplines to communicate findings to public audiences and has “expanded the possibilities for citizens around the world to share knowledge and interact about scientific advancements.”[3] This use of social media is arguably necessary in today’s digital age for researchers and institutions to amplify their work and ensure it has an impact in their fields. The academic community has, for instance, developed novel metrics (e.g., Altmetric) for tracking the presence of one’s research across social media,[4] and specific to the broader field of disinformation, open-source intelligence (OSINT) and information warfare research, entire X (formerly Twitter) communities have emerged over the years as central online gathering places for academic discussion and analysis.[5]
Therefore, I hypothesize that political pressure from the U.S. government has produced a measurable chilling effect on academic institutions working on disinformation-related studies, visible in their public posting activity. This analysis posits that the social media posting of research institutions can reflect their priorities, their most compelling findings and most impactful work. I argue that the degree of public-facing research communications from academic institutions about disinformation research can function as a proxy for measuring the health of the field. For instance, a downturn in posting after the 2024 U.S. election could be consistent with the claim that academic and civil society institutions began engaging in self-censorship or evasive behavior in response to the Trump administration, whose political movement has openly displayed its hostility to research on disinformation and media for years.
Given the importance of social media to this field and the federal government’s increasing encroachment on its activities, the prevalence of online research communications from academic institutions can function as a proxy for its health. In simple terms: a robust research environment should include a robust online discussion about this research. However, given the online harassment and attacks directed at disinformation researchers and their institutions by the administration and its supporters, many researchers are self-censoring, and some are leaving the U.S. altogether to avoid the administration’s wrath.
Analyzing institutions’ social media activities
I assessed the online activity of Academic Journals and three groups of research institutions: Public U.S. Universities; Private U.S. Universities; and Non-U.S. Universities. Journals were selected on the basis of their prominence and social media following (e.g., Nature, The Lancet, JAMA), while institutions were selected according to their global and regional rankings in well-known indexes (e.g., QS World University Rankings), as well as their work in politics, policy and media studies. Where possible, at least two X accounts for each institution, typically the institution’s main account and one or more department- or project-specific accounts, were included for analysis, and posts were restricted to English. Institutions whose sole mandate is the study of disinformation (e.g., the Stanford Internet Observatory) were excluded, as their communications remain wholly centered on the field. By contrast, institutions with broader portfolios may reveal clearer signs of a chilling effect, since reductions in disinformation-related research communications can be interpreted against the backdrop of other ongoing research activity.
The complete list of accounts analyzed can be found in Appendix I. Content posted between 27 June 2024 and 30 June 2025—a period of 12 months roughly centered around the 2024 U.S. election—containing the following keywords was pulled down from a commercial social listening tool and analyzed: “misinformation,” “disinformation,” “propaganda,” “fake news,” “content moderation,” “conspiracy,” “conspiracies” and “polarization.” Individual researchers’ accounts were excluded to protect their safety.
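As a hypothetical sketch only, the keyword and date filtering step described above might look like the following in pandas; the column names (`date`, `lang`, `text`) and the DataFrame-based approach are assumptions, since the actual posts were retrieved through a commercial social listening tool.

```python
import pandas as pd

# Keywords from the study; "conspiracies" is listed separately because a
# plain substring match on "conspiracy" would not catch the plural.
KEYWORDS = [
    "misinformation", "disinformation", "propaganda", "fake news",
    "content moderation", "conspiracy", "conspiracies", "polarization",
]

def filter_posts(posts: pd.DataFrame) -> pd.DataFrame:
    """Keep English posts in the study window mentioning any keyword.
    Assumes columns: 'date' (datetime), 'lang', 'text'."""
    in_window = posts["date"].between("2024-06-27", "2025-06-30")
    english = posts["lang"].eq("en")
    # Allow any whitespace inside multi-word phrases like "fake news"
    pattern = "|".join(k.replace(" ", r"\s+") for k in KEYWORDS)
    matches = posts["text"].str.contains(pattern, case=False, regex=True)
    return posts[in_window & english & matches]

# Toy data: only the first post passes both the window and keyword filters
posts = pd.DataFrame({
    "date": pd.to_datetime(["2024-12-01", "2024-12-02", "2023-01-01"]),
    "lang": ["en", "en", "en"],
    "text": [
        "New study on election disinformation",
        "Breakthrough in tumor metabolism research",
        "Old post about misinformation",
    ],
})
filtered = filter_posts(posts)
```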
To assess whether posting activity underwent a structural break around the U.S. election, I employed a time-series regression design with a cutoff set at 5 November 2024. For each dataset, the outcome variable was the daily count of mentions. The model included three key predictors: a continuous time variable (capturing baseline trends), a post-election indicator (capturing any immediate level shift beginning on the cutoff date), and a time-after variable (capturing changes in the slope of the trend after the cutoff). Models were estimated using either Poisson or Negative Binomial regression for the count data, with dispersion diagnostics confirming no substantial overdispersion in most cases. In simpler terms, these models are designed to handle daily post counts, and the diagnostics indicate that the data fit these statistical assumptions well, meaning the patterns observed are unlikely to be artifacts of model error or excessive variability. Furthermore, a time-series regression design allows the analysis to test whether posting behavior changed significantly after a specific event—in this case, the 2024 U.S. election—while accounting for natural day-to-day fluctuations over time. In practical terms, it isolates the timing and magnitude of any structural break in activity, making it possible to determine whether observed shifts represent random variation or a systematic change associated with the election period.
A total of 507 posts were analyzed: 172 from Academic Journals, 63 from Public U.S. Universities, 122 from Private U.S. Universities and 150 from Non-U.S. Universities. An examination of the daily volume of content posted by each kind of institution can be seen below in Figure 1.

Figure 1: Daily volume of posts amplifying disinformation research from academic journals, public and private U.S. universities and non-U.S. universities.
How did institutions’ communications change after the election?
Across all four datasets, the regressions point to a clear structural break in activity following the 2024 U.S. election. First, in all cases, a t-test indicated statistically significant differences in posting volume before and after the election. In terms of the regression, posting rates for Public and Private U.S. Universities fell to between 17–23% of their pre-election levels immediately after 5 November 2024. For these institutions, the dominant effect was a large and immediate one-time drop, followed by a slow but persistent decline in posting volume in the months after the election. For Academic Journals and Non-U.S. Universities, the post-election decline was much smaller and weaker in statistical significance than that of Public and Private U.S. Universities, but was likewise followed by a continued downward slope, indicating a delayed, slight erosion of activity over time.
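To make the reported rate ratios concrete: in a Poisson model the post-election coefficient lives on the log scale, so it must be exponentiated before it can be read as a percentage of pre-election posting. The coefficient below is an assumed value for illustration, not a figure from Table 1.

```python
import math

beta_post = -1.6  # assumed log-scale level-shift coefficient (illustrative)
rate_ratio = math.exp(beta_post)
# A coefficient of -1.6 corresponds to a rate ratio of about 0.20,
# i.e., posting at roughly 20% of its pre-election level,
# squarely inside the reported 17-23% range.
```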
Taken together, the regression results suggest that while all institutions curtailed their communications about disinformation-related research after the election to some degree, for U.S. universities the change was abrupt. Posts referencing government-funded research, platform accountability and electoral manipulation dropped sharply within days of the election, suggesting a rapid adjustment to a newly hostile political environment. By contrast, academic journals and non-U.S. institutions exhibited a slower, more diffuse contraction, likely reflecting reduced submissions, delayed publication pipelines and the chilling effect of the American debate on their U.S.-based collaborators and funders. Rather than a uniform retreat, the data therefore point to a selective reorientation of communication, away from politically sensitive topics like election interference and toward safer or more technical domains, consistent with a climate of heightened caution rather than disengagement from the field altogether. Regression results can be seen in Table 1, below.

The findings highlight how rapidly political shocks can alter the communication practices of academic institutions. The immediate and pronounced decline among U.S. public and private universities illustrates a direct responsiveness to perceived political threats, echoing prior research on academic self-censorship under authoritarian pressure.[6] While universities in the U.S. operate in a formally democratic environment, the 2024 election demonstrates how quickly political rhetoric and threats to funding can narrow the boundaries of permissible research communication. Notably, public universities—institutions most directly reliant on federal funding and therefore most vulnerable to shifts in U.S. government priorities—exhibited the sharpest and most immediate declines in posting, underscoring how financial dependency amplifies susceptibility to political pressure. This aligns with studies showing that democratic backsliding often begins with more subtle incentives that encourage institutions to constrain themselves, rather than overt suppression.[7]
Moreover, to demonstrate that the patterns seen around disinformation research are indeed anomalous, I further examined how the same entities in each category discussed research unrelated to disinformation. Pulling posts containing hard-science and public-health keywords, including “cancer,” “immune system,” “tumor,” “metabolism” and “diabetes,” and running the same series of regressions centered on the 2024 U.S. election revealed strikingly different results, seen below in Table 2.

Across the hard-science datasets, regressions showed no structural break in activity following the 2024 U.S. election. Posting rates remained stable or even increased slightly in the months after, with post-election coefficients clustering near 1.0 and generally lacking statistical significance. For U.S. universities, mean posting volumes nearly doubled after the election, while non-U.S. institutions displayed a short-term decrease in posting that suggested momentary volatility rather than a sustained shift. Academic journals likewise showed virtually no change in trend; a t-test confirmed that differences in posting volume before and after the election were not statistically significant, indicating that the brief dip captured in the time-series regression was a momentary fluctuation.
In contrast to the sharp contraction observed in the disinformation-related datasets, hard science posts continued largely uninterrupted, confirming that the earlier declines were field-specific rather than systemic. The disinformation-focused regressions in Table 1 showed significant drops—particularly among U.S. universities—suggesting that the downturn in activity was selective and potentially even politically conditioned, not the result of a general reduction in institutional engagement online.
The slower, slighter declines seen in disinformation-related research posts from journals and non-U.S. universities are also instructive. While journals are not directly dependent on U.S. federal funding, they operate within a transnational ecosystem in which U.S. political hostility can influence editorial priorities, peer review, reputational risk, and even the number of manuscripts submitted. Non-U.S. universities, similarly embedded in U.S.-linked funding and collaboration networks, also appear not to have been fully insulated from these pressures. These patterns suggest that the chilling effect of U.S. government actions reverberated internationally.
At a theoretical level, this study demonstrates how social media activity can function as a proxy for the health of a research field.[8] Declining volumes of research communication may not reflect transient attention cycles, but rather deliberate institutional caution in politically sensitive domains. This framework could be applied to other areas where government hostility creates incentives for silence, such as climate science or gender studies. Ultimately, this research in progress highlights the fragility of both the information environment and academic freedom itself: for research areas like social media and disinformation—already entwined with questions of power, legitimacy, and security—chilled communication leaves societies less informed precisely when transparency and expertise are needed most.
Where do scholars turn from here?
This study provides quantitative evidence of a chilling effect on academic communication about social media and disinformation in the wake of the 2024 U.S. election. Social media activity covering research in these areas fell sharply among U.S. universities immediately after the election, consistent with an environment of heightened scrutiny and fear of political retaliation from an administration openly hostile to this research field. Academic journals and non-U.S. universities displayed slower and smaller—but still measurable—declines, suggesting that the chilling effect extended beyond the institutions most directly dependent on U.S. federal funding. Most importantly, the data show that this effect was not systematic across disciplines: hard-science communications remained largely consistent and unaffected by the election.
The U.S. case demonstrates how rapidly political hostility can distort the production and circulation of academic knowledge, even without formal censorship. As universities and journals withdraw from online engagement, the public sphere loses access to independent expertise on issues—such as foreign influence, algorithmic manipulation, and conspiracy networks—that are most vital in periods of democratic stress. In this sense, the administration’s assault on disinformation research represents more than a policy change; it constitutes a structural disruption in the international flow of scientific communication.
Appendix can be found here: https://doi.org/10.5281/zenodo.17571626



