Sebastian Koth

There is a Model for That: Science and Public AI Infrastructures

Sebastian Koth highlights the importance of scientific institutions for establishing public AI infrastructures.
26 February 2025

This article argues that the culture and institutions of science are a central pillar for developing and institutionalizing public AI infrastructures. By fostering an environment in which AI technology is permanently accessible and accountable, science organizations navigate the complexities of integrating this technology into specific social contexts. They thus serve as a blueprint for other institutions and organizations pursuing trustworthy, responsibly made technology solutions for the problems that matter most, rather than for profit and power alone.

Let’s Date Happy: Science and AI Technology

From the perspective of science, the current developments in generative AI are ambivalent. On the one hand, this technology brings about significant positive changes. It enables new research methodologies such as data visualization, information extraction, and inference. For instance, in fields like biomedical research and materials science, it facilitates the analysis of vast unstructured datasets, driving remarkable discoveries (Nguyen et al. 2024); in the social sciences and economics, it has brought about distinct forms of inquiry, new research agendas, and theoretical developments (Manning et al. 2024; Bail 2024). In academic education and teaching, it enables new forms of personalized learning, enhances collaborative environments, promotes interdisciplinarity, and facilitates innovative teaching models. In the day-to-day work of scientists, it not only assists in writing better texts (Herbold et al. 2023) but also supports note-taking, coding, search, science communication, administrative work, and project planning.

On the other hand, these new possibilities can only be seen as positive with strong reservations. When applied in normatively questionable or poorly designed ways, generative AI technology can undermine academic work and conflict with core principles of scientific practice, such as reproducibility, explainability, and accountability (Messeri and Crockett 2024). These issues, however, point to the limitations of using the technology for the purposes of science and research; ultimately, they need to be addressed by scientists themselves, their communities, and the organizations in which they work (Fecher et al. 2023).

Another, arguably more pressing issue is that AI technology is increasingly being shaped and controlled by private companies. Over the past decade, essential resources for research on AI technology – such as computing power, data, and expertise – have dramatically shifted from public research institutions to the private sector (Whittaker 2021; Ahmed and Wahed 2020; Ahmed et al. 2023). This shift, however, does not affect this particular research field alone. As the use of AI technology becomes increasingly pervasive across scientific fields and academia more broadly, it raises serious concerns for the future of science. From the perspective of scientific research, the disproportionate degree to which AI technology is developed behind corporate closed doors results in less reliable systems, as private companies prioritize market interests and often avoid transparency. This lack of openness is fundamentally at odds with scientific methodologies, in which transparency and accountability are essential. Without access to original data, training processes, or the AI models themselves, researchers cannot verify or reproduce results generated by commercial AI systems, which in turn compromises scientific rigor. In academic work and teaching, the gap between privately controlled AI technology and core scientific principles is equally troubling. Big Tech, Ed Tech, and academic publishers work tirelessly to integrate their AI applications into research and educational institutions, using the technology to lock science into their business models (Williamson 2024; Komljenovic et al. 2024). Exploiting their powerful positions, they develop AI systems based on data that researchers and students generate, store, and publish on their platforms, without granting them a voice or a share in the results (Komljenovic and Williamson 2024). This is particularly alarming as these systems become increasingly vital for research and education, supporting discovery, interpretation, writing, editing, reviewing, workflow management, and learning analytics (Bergstrom and Ruediger 2024; Gibney 2024). Moreover, large private AI platform operators leverage their vast resources to enter sensitive research fields like health research (Marelli et al. 2021), exploiting regulatory loopholes and blurring the lines between science and commercial R&D (Colonna 2023), transforming these fields into business areas often without adhering to the standards and ethical frameworks underlying scientific research (Koebler 2024).

Bluntly put, the current institutional setup of generative AI technology contradicts the norms and procedures of science. Arguably, drawing scientists into corporate AI systems represents yet another wave of what critical scholars refer to as a platform strategy that turns scientific functions over to private firms whose longevity, openness, and corporate goals remain uncertain (Plantin et al. 2018; Mirowski 2018). This development not only drives commercialization, fosters dependency on private services, and is likely to lead to bad science and technology; it ultimately risks undermining science’s autonomy and its commitment to addressing self-defined problems on its own terms.

In academia, however, this tendency is not met only with concern and frustration. The “ChatGPT moment” of 2022 sparked a new spirit and, with it, a Cambrian explosion of initiatives and efforts to organize and build alternatives to corporate AI infrastructures. I believe this is a clarion call with the potential to create momentum beyond academia, extending to the general public, media, public administration, local small-scale businesses, and international organizations. I therefore argue that the culture and institutions of science can serve as a blueprint for developing and institutionalizing public AI infrastructures that make AI technology permanently accessible and accountable. In the following, I outline why this is desirable and how science organizations establish such infrastructures by mobilizing resources and capacities, educating on AI use and its implications, ensuring accessible, user-friendly, high-quality AI tools, and developing AI systems tailored to specific needs.

Between a Rock and a Hard Place: Corporate AI and AI Nationalism

The rise of generative AI technology has brought about problems such as high demands for energy and material resources (Strubell et al. 2019; Schwartz et al. 2019); the undignified exploitation of human labour for data collection, cleaning, annotation, and validation (Gray and Suri 2019; Williams et al. 2022; Miceli and Posada 2022); multiple biases built into and reproduced by AI technology (Lucy and Bamman 2021); and general misuse, for instance, the spread of misinformation and cybercrime (Shoaib et al. 2023; Chen and Shu 2024). On that account, we should not be misled by private companies’ claims about their ability to deliver technically proficient systems, and should rather ask how technological progress can translate into real-world usefulness and improvements in people’s lives. We must recognize that the concentrated organization of AI within the private sector is not necessary for the technology to flourish and benefit society – in fact, it may create dependencies that hinder genuine innovation.

The most significant dependency becomes apparent if we take a political economy perspective on AI technology. Critical scholars who examine the relationship between economic conditions and power in AI (e.g., Srnicek 2018; Whittaker 2021; Crawford 2021; van der Vlist et al. 2024) argue that market concentration and monopolization are driven by the specific interplay between AI software and AI hardware. On the software side, AI models represent a new software component with broad application potential. However, this software component requires not only vast amounts of data but also considerable computing power, special AI chips, and specific computing architectures. Hardware thus plays a decisive role in shaping what kinds of AI technology are being developed and what can be done with them (Hwang 2018; Hooker 2020; Luitse and Denkena 2021; Sastry et al. 2024). This technical dependency gives rise to a political-economic dependency: the deployment of AI technology increasingly relies on global AI corporations, their cloud computing infrastructures – including network cables, satellites, and data centers – as well as their control of and investments in all parts of the AI value chain (Luitse 2024). Consequently, technology conglomerates such as Amazon, Microsoft, Alphabet, Alibaba, and Tencent have developed comprehensive AI stacks and ecosystems to deliver “AI-as-a-service,” integrating infrastructure, models, and applications within their cloud services to offer tailored solutions and marketplaces aimed at attracting third-party developers and businesses (van der Vlist et al. 2024). The growing dependence on Big AI has sparked numerous efforts to curb its dominance and develop alternatives, leading, for instance, to the proliferation of open-source AI frameworks, collaboratively managed open datasets, distributed AI computing platforms, and vibrant online communities striving to develop more transparent, accessible, and accountable AI systems. While many of these initiatives acknowledge the private sector’s crucial role in driving AI development and diffusion, they emphasize in particular that much of the progress in AI technology relies on commons as a central aspect of the AI innovation ecosystem (Schrepel and Potts 2024).

The dependence on Big AI is particularly troubling given the delicate socio-technical requirements of AI technology. At the core of these requirements are AI models, a novel class of algorithms distinguished by their portability and adaptability. Portability allows AI models to be transferred, as well as split and deployed across systems and environments, without losing functionality, thereby facilitating their application in diverse software and hardware setups. Adaptability means an AI model can be applied in very different contexts and adjust to changing conditions. Once instructed about its environment and tasks, it aligns its performance; moreover, it is able to learn and improve. These features make AI models highly flexible and broadly applicable tools. However, to realize this flexibility, they need to be trained appropriately and well integrated into their specific socio-technical context, as well as constantly maintained and responsibly managed. This requires not only technical expertise but also human labor. The work within the AI production process – often geographically dispersed and carried out under precarious conditions – includes tasks such as data collection, curation, and annotation, as well as model training, evaluation, and verification (Muldoon et al. 2024). Moreover, inputs, reactions, and feedback from users are often captured in order to improve AI applications and models. This essential reliance of AI systems on practical human knowledge and work becomes even more apparent when acknowledging that the contexts in which AI is applied are themselves dynamic. Contextual shifts lead to so-called “AI aging” (Vela et al. 2022): AI models degrade over time if they are not synchronized with their environments or if the input data they receive differs from the data used for training (data drift), resulting in errors and useless applications. Like physical tools, AI models wear down over time, following a lifecycle that, at its end, can render them inaccurate, ineffective, and even dangerous. Moreover, AI systems are subject to the “alignment problem” and may fail to carry out tasks as originally intended by their developers, necessitating continuous monitoring and adjustments regarding their robustness, interpretability, controllability, and ethicality (Ji et al. 2023). Finally, it is the delicate relationship between AI systems and human persons that highlights the need for responsibly managing these systems. While many AI models are trained on publicly available datasets, the most valuable AI applications require sensitive human data, which are subject to data protection laws. In addition to protecting training data drawn from humans, protecting trained AI models is equally important, because circulating and repurposed AI models pose significant risks, highlighting the necessity for sophisticated AI regulation and innovative governance, for instance, on model-sharing platforms (Mühlhoff and Ruschemeier 2024; Gorwa and Veale 2024).
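
To make the notion of data drift concrete, the sketch below compares the distribution of a single input feature recorded at training time with the distribution observed in production, using a two-sample Kolmogorov–Smirnov test. The feature, the synthetic data, and the alert threshold are illustrative assumptions, not part of any particular monitoring system.

```python
# A minimal sketch of data-drift monitoring, one driver of "AI aging":
# compare incoming feature values against the reference distribution seen at training time.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature values at training time
incoming = rng.normal(loc=0.6, scale=1.0, size=5_000)   # feature values observed in production

result = ks_2samp(reference, incoming)  # two-sample Kolmogorov-Smirnov test
if result.pvalue < 0.01:                # illustrative alert threshold
    print(f"drift detected (KS statistic = {result.statistic:.3f}); retraining or recalibration advised")
else:
    print("no significant drift detected")
```

In practice, such checks run continuously over many features and model outputs; the point here is only that keeping a model synchronized with its environment is an ongoing, skilled maintenance task rather than a one-off technical step.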

But if the need for public AI infrastructures is so evident, shouldn’t we already see their development underway? The answer is complex. The concentration of critical AI resources in the private sector – and thus of the ability to plan, develop, and provide proficient AI systems (Luitse 2024; Głowicka and Málek 2024; Korinek and Vipra 2024) – has given rise to a growing number of national programs that aim to reorganize the political economy of AI and to shape the role of this technology in work, industry, and the public sector at the local level with greater self-determination. For example, in 2019, Sweden launched “AI Sweden”, not only to develop and maintain Swedish language models that protect local language and culture in alignment with technological progress, but also to develop knowledge, skills, and talent, build networks among small- and medium-sized businesses, organize access to high-quality data, provide computing power, and ultimately create and implement solutions for domain-specific issues in areas such as healthcare, education, finance, and public administration. Other programs and initiatives include, for instance, the “Netherlands AI Coalition”, the EU’s “AI Factories”, “AI Singapore”, the “Continental AI Strategy” for Africa, and the “Brazilian Artificial Intelligence Plan”.

The national AI programs currently being rolled out to counter concentrated corporate AI make scientific expertise indispensable. However, is science as a specific culture and institution of knowledge and technology also indispensable? I argue that it is, especially if AI technology is to genuinely help society flourish and progress. Although these programs usually herald slogans like “AI for All”, it is unclear whether they also aim to establish public AI infrastructures and thus render AI technology permanently accessible and accountable, or whether they are narrowed to economic goals only, focusing on accelerating the local adoption of AI technology to avoid falling behind in the global competition for talent and business and to secure a position in Chinese or American AI value chains (Sturgeon 2021; Butollo et al. 2022; Lehdonvirta et al. 2023). In fact, such programs often emerge under the call for sovereignty, aiming to seize political and economic power in the face of the power of others. What this means for people on the ground and for the public embedding of AI technology remains vague at best. In such contexts, sovereignty is often assumed to be an end in itself, with its realization in terms of liberal, social, democratic, and emancipatory values left undefined (Pohle and Thiel 2020). Indeed, these state-led AI programs are criticized by skeptical voices as “AI nationalism” (Hogarth 2018; Spence 2019; Satariano and Mozur 2024). Rather than working collaboratively to develop AI, countries are adopting AI industrial policies driven by economic and national security imperatives that aim to control and limit access to critical components of the AI supply chain. This approach may not only lead to an accelerated arms race between countries and geopolitical blocs and to increased protectionist state actions in support of national champions; it is also likely to ultimately backfire, entailing problematic economic and political consequences in the long run (Aaronson 2024). Precisely because public research and educational institutions are so heavily involved in national AI programs, we must ask what role science plays in this context and what role we think it should play. When we consider Merton’s seminal “Science and Technology in a Democratic Order” (1942), which outlines the ethos of modern science, striking contrasts emerge not only with science operating on corporate AI platforms but also with the emerging trend of AI nationalism. The institutional imperatives of science he describes – communism, universalism, disinterestedness, and organized skepticism – emphasize global cooperation and the need to treat AI technology as a global public good, rather than a national asset to be exploited and defended against others.

Science for Public AI Infrastructures

The terminology surrounding AI policy can be overwhelming. While AI must be safe, trustworthy, open, responsible, and democratic, the term “public AI infrastructures” is equally essential. Just as public infrastructures like healthcare systems, libraries, and energy grids have historically driven positive societal change, so too can generative AI technology. Public AI infrastructures refer to resources that are permanently accessible and accountable. “Accessible” means that AI resources and services are low-cost or free and usable without specialized technical skills; “accountable” indicates that the development and provision of resources and services prioritize the public interest and ensure responsiveness, institutionalized through mechanisms such as participatory governance, continuous feedback loops, and co-creation models; “permanent” signifies a commitment to long-term independence, guaranteeing that AI resources and systems evolve, adapt, and remain available, accountable, and sustainable, while preserving their foundational public purpose (Public AI Network 2024). I argue that science can serve as a blueprint for public AI infrastructures and for trustworthy, responsibly made AI technology, focusing on solutions for problems that matter, not for profit and power only. To support this argument, I will highlight how science organizations tailor AI technology to their cultural standards by setting up technical AI resources in distinct ways, providing education on AI technology use, and offering customized, low-barrier, high-quality access to AI tools.

First, science demands its own AI resources and actively works to organize these resources according to its principles of openness and accountability. Over the past decade, the shift of critical AI resources from public science organizations to the private sector (Ahmed and Wahed 2020; Besiroglu et al. 2024; Nix et al. 2024) has led to a narrowing of research agendas that prioritize data-intensive, computationally heavy deep learning methods at the expense of AI approaches focused on reducing environmental costs and creating more robust and fair technology (Klinger et al. 2020). Moreover, limited access to resources such as computing power systematically sidelines academic researchers, leading to “de-democratized” AI research (Ahmed and Wahed 2020). This development is incompatible with the liberal, social, democratic, and emancipatory ideals that drive science, prompting initiatives to strengthen AI research capacities at public science organizations (Castle et al. 2024). Launched in 2023, the Swiss AI Initiative involves significant investments in computational resources, bringing together dozens of academic institutions across Switzerland to advance foundational AI research on large models, LLM security and privacy, and human-AI alignment, as well as to develop domain-specific foundational models for application in areas such as science, education, health, robotics, and sustainability. Other examples are the large multimodal AuroraGPT model, developed according to scientific standards by the U.S. Department of Energy for scientific research; the UK’s AI Research Resource initiative, which aims to improve access to computing power for academic institutions; and the National Artificial Intelligence Research Resource Initiative by the U.S. National Science Foundation, which aims to address researchers’ needs by increasing access to a diverse array of AI resources, including computational capabilities, AI-ready datasets, and pre-trained models.

Such initiatives hold the potential to reduce the dependency of academic AI research on corporate infrastructure and to drive valuable innovations, as demonstrated by the BLOOM project. In response to the fact that academia, nonprofits, and smaller companies’ research labs find it difficult to create, study, or even use LLMs because industrial labs deny access, the project brought together over 1,000 researchers from over 70 countries and more than 250 institutions to collaboratively build the BLOOM AI model. The training of the model was funded by the French public research organizations CNRS and GENCI (Scao et al. 2023). The model is publicly available on model-sharing platforms and can be reused under the “Responsible AI License”, which was also developed during the project. Released in mid-2022, the model is, in terms of standard benchmarks, as powerful as OpenAI’s GPT-3. Unlike the proprietary GPT-3 model, which was optimized for the English language, designed for commercial use, and made no details about its internal workings and training data publicly available, the BLOOM model was trained on diverse datasets to support multiple languages, is intended for a variety of purposes, and was released as open source, making it a powerful tool accessible to the global research community and beyond. The BLOOM project exemplifies the principles of open science culture, aiming not only to build a capable AI model but also to promote accessibility and strengthen collaboration among researchers and institutions. It achieved this by being fully transparent about its training methods and datasets, making it suitable for public reuse and development. By emphasizing diversity and inclusion and incorporating various perspectives into the development process, the project represents a socially responsible contribution. Open science culture thus serves as a compelling model for aligning AI technology with society’s specific needs and establishing sustainable development and usage practices – a model increasingly embraced even by for-profit companies, such as Cohere with its Aya model (Schrepel and Potts 2024).
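
Because the BLOOM weights are openly published, any researcher can inspect and run them directly with open-source tooling. The following minimal sketch loads one of the smaller BLOOM checkpoints (bloom-560m, chosen here only so the example runs on modest hardware; the full 176B-parameter model requires cluster-scale resources) via the Hugging Face transformers library and generates a short continuation. The prompt and generation settings are assumptions made for the example.

```python
# A minimal sketch: loading an openly released BLOOM checkpoint and generating text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small public variant of the BLOOM family
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Open science infrastructures matter because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```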

Second, science organizations place a strong emphasis on comprehensive, self-made, and up-to-date education about AI technology. At universities, lecture series, online courses, video tutorials, cross-departmental training, and hands-on workshops on the use and implications of AI technology have become a core part of studying, teaching, research, and administration. Students, staff, and faculty create these educational resources themselves, sharing the knowledge and skills needed to responsibly and innovatively develop and deploy this technology. You can learn, for instance, to download open-source AI models, take them offline, fine-tune them at will, and use them on-device in secure environments. Learning to use AI technology appropriately in scientific contexts involves not only understanding your specific task or problem in relation to the capabilities of the tool you are building but also acquiring the ability to critically assess its implications and raise awareness of the biases and risks involved. Besides providing instructions on effectively leveraging AI technology, educational resources also include curated and openly accessible materials such as prompt archives and repositories for datasets and code. These resources must comply with ethical guidelines and data protection and privacy rules, and they are designed to encourage re-creation and expansion, fostering an environment that supports the development and application of AI technology aligned with principles of trust, inclusivity, and sustainability. By promoting purposeful and accountable use, such resources nurture literacy and a culture capable of addressing the complexities of a society increasingly reliant on this technology. While diversity and inclusiveness are crucial for countering bias in AI models (West et al. 2019; Kuhlman et al. 2020), they are often lacking in many AI companies. As demonstrated by science organizations, alternative approaches that are better suited to meeting the actual requirements of shaping this technology are already in place.
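
As a concrete illustration of what such hands-on training covers, the sketch below runs a previously downloaded open model fully offline, so that no prompts or data leave the local machine. It relies on the offline flags of the open-source Hugging Face libraries; the model name and prompt are assumptions for the example, and the model must have been fetched into the local cache during an earlier run with network access.

```python
# A minimal sketch: using a locally cached open model with network access disabled.
import os

os.environ["HF_HUB_OFFLINE"] = "1"        # tell the Hugging Face hub client not to call out
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # tell transformers to rely on the local cache only

from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
result = generator("The norms of open science include", max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
```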

Third, science organizations provide customized, low-barrier, high-quality access to tools for their members who want to apply AI technology to their specific needs with ease. An increasing number of universities, often collaborating with one another, offer specialized portal platforms to support students and staff in their daily activities. These platforms grant access to various AI tools developed and hosted by external providers via APIs, with the advantage of protecting users and promoting equity. They not only enable access to sophisticated applications but also ensure privacy by anonymizing user data and avoiding the storage of personal login information or metadata, thereby complying with data protection regulations. Moreover, within universities, skills and competencies are often unevenly distributed, and AI technology is likely to exacerbate these inequalities. Portal platforms help mitigate this tendency by providing equal access to advanced AI tools for all, regardless of technical expertise or available resources, facilitating educational equity and equal opportunities. However, these portal solutions often rely on commercial AI applications and can thus quickly become very expensive. This incentivizes universities to turn to open-source models and set up their own AI systems. Customized for academic use, these systems can effectively support research, teaching, learning, and administrative processes. Operated on shared high-performance computing clusters of universities and public research organizations, they grant the academic community access to AI applications while also maintaining control over computational resources and data management. Aligning with the principle that “small is beautiful” (Rehak 2024), custom-built AI systems reduce costs, deliver effective performance, and are environmentally efficient.
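
The portal pattern described above can be pictured as a thin institutional gateway between users and an external provider. The sketch below is a hypothetical illustration rather than any university’s actual system: the endpoint name, provider URL, and payload format are invented for the example. The gateway keeps the institutional API key server-side and forwards only the prompt text, so no user identity or metadata reaches the provider.

```python
# A hypothetical sketch of a university portal gateway relaying prompts to an external
# AI provider. PROVIDER_URL and the payload format are illustrative placeholders.
import os

import httpx
from fastapi import FastAPI
from pydantic import BaseModel

PROVIDER_URL = "https://api.example-provider.org/v1/completions"  # placeholder, not a real service
API_KEY = os.environ["PROVIDER_API_KEY"]  # institutional key, never exposed to end users

app = FastAPI()


class PromptRequest(BaseModel):
    prompt: str


@app.post("/chat")
async def chat(request: PromptRequest) -> dict:
    # Forward only the prompt text; nothing that identifies the user is sent or stored.
    async with httpx.AsyncClient(timeout=30.0) as client:
        response = await client.post(
            PROVIDER_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": request.prompt},
        )
    return response.json()
```

In a real deployment, the university side would add its own authentication and cost controls; the point here is only the separation between user identity and the request that leaves the institution.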

To summarize, amid the competing economic and political interests in AI technology, science is an excellent starting point for understanding the challenges and opportunities of shaping AI technology under the rubric of public infrastructures. Building on scientific norms, standards, and culture, science organizations demonstrate how to create public AI infrastructures that are permanently accessible, accountable, and focused on collaboration, the common good, and societal progress. Showcasing better ways to navigate the complexities of integrating AI into specific social contexts, they can serve as a blueprint for other institutions and organizations. By arranging their own resources and capacities, educating about AI use and its implications, ensuring accessible, user-friendly, high-quality AI tools, and developing AI systems tailored to their specific needs, these institutions can contribute to the much-needed emergence of public AI infrastructures, shaping AI technology and its integration into society in a more economically, politically, and environmentally sustainable manner.

Author info

Sebastian is a doctoral researcher at the Weizenbaum Institute. He studied physics, philosophy, and sociology in Leipzig and Berlin and worked as a project scout at the Technical University of Dresden. In his research he investigates the relation between digital infrastructures and scientific practice.

Digital Object Identifier (DOI)

https://doi.org/10.5281/zenodo.14930893

Cite as

Koth, S. (2025). There is a Model for That: Science and Public AI Infrastructures. Elephant in the Lab. DOI: https://doi.org/10.5281/zenodo.14930893

References


Aaronson, S. A. (2024). The Age of AI Nationalism and Its Effects.

Abbott, A., & Schrepel, T. (Eds.). (2024). Artificial Intelligence and Competition Policy. Concurrences.

Ahmed, N., & Wahed, M. (2020). The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research (No. arXiv:2010.15581). arXiv. https://doi.org/10.48550/arXiv.2010.15581

Bail, C. A. (2024). Can Generative AI improve social science? Proceedings of the National Academy of Sciences, 121(21), e2314021121. https://doi.org/10.1073/pnas.2314021121

Bergstrom, T., & Ruediger, D. (2024). A Third Transformation? Generative AI and Scholarly Publishing. https://sr.ithaka.org/publications/a-third-transformation/

Besiroglu, T., Bergerson, S. A., Michael, A., Heim, L., Luo, X., & Thompson, N. (2024). The Compute Divide in Machine Learning: A Threat to Academic Contribution and Scrutiny? (No. arXiv:2401.02452). arXiv. https://doi.org/10.48550/arXiv.2401.02452

Bratton, B. H. (2016). The Stack: On Software and Sovereignty. MIT Press.

Butollo, F., Gereffi, G., Yang, C., & Krzywdzinski, M. (2022). Digital transformation and value chains: Introduction. Global Networks, 22(4), 585–594. https://doi.org/10.1111/glob.12388

Castle, D., Denis, M., & Samandar Eweis, D. (2024). Preparing National Research Ecosystems for AI: Strategies and progress in 2024. International Science Council. https://doi.org/10.24948/2024.06

Chen, C., & Shu, K. (2024). Combating misinformation in the age of LLMs: Opportunities and challenges. AI Magazine, 45(3), 354–368. https://doi.org/10.1002/aaai.12188

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Fecher, B., Hebing, M., Laufer, M., Pohle, J., & Sofsky, F. (2023). Friend or foe? Exploring the implications of large language models on the science system. AI & SOCIETY. https://doi.org/10.1007/s00146-023-01791-1

Gibney, E. (2024). Has your paper been used to train an AI model? Almost certainly. Nature, 632(8026), 715–716. https://doi.org/10.1038/d41586-024-02599-9

Głowicka, E., & Málek, J. (2024, March 18). Digital Empires Reinforced? Generative AI Value Chain. Network Law Review. https://www.networklawreview.org/glowicka-malek-generative-ai/

Gray, M. L., & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (1st ed). HarperCollins Publishers.

Hogarth, I. (2018). AI Nationalism. Ian Hogarth. https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism

Hooker, S. (2020). The Hardware Lottery (No. arXiv:2009.06489). arXiv. https://doi.org/10.48550/arXiv.2009.06489

Hwang, T. (2018). Computational Power and the Social Impact of Artificial Intelligence (SSRN Scholarly Paper No. 3147971). Social Science Research Network. https://doi.org/10.2139/ssrn.3147971

Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., Duan, Y., He, Z., Zhou, J., Zhang, Z., Zeng, F., Ng, K. Y., Dai, J., Pan, X., O’Gara, A., Lei, Y., Xu, H., Tse, B., Fu, J., … Gao, W. (2024). AI Alignment: A Comprehensive Survey (No. arXiv:2310.19852). arXiv. https://doi.org/10.48550/arXiv.2310.19852

Klinger, J., Mateos-Garcia, J., & Stathoulopoulos, K. (2022). A narrowing of AI research? (No. arXiv:2009.10385). arXiv. http://arxiv.org/abs/2009.10385

Koebler, J. (2024, April 11). Is Google’s AI Actually Discovering “Millions of New Materials?” 404 Media. https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/

Komljenovic, J., Sellar, S., Birch, K., & Hansen, M. (2024). Chapter 8. Assetization of higher education’s digital disruption.

Komljenovic, J., & Williamson, B. (2024). Behind the platforms: Safeguarding intellectual property rights and academic freedom in Higher Education. Education International. https://www.research.ed.ac.uk/en/publications/behind-the-platforms-safeguarding-intellectual-property-rights-an

Lehdonvirta, V., Wu, B., & Hawkins, Z. (2023). Cloud empires’ physical footprint: How trade and security politics shape the global expansion of U.S. and Chinese data centre infrastructures. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4670764

Luitse, D. (2024). Platform power in AI: The evolution of cloud infrastructures in the political economy of artificial intelligence. Internet Policy Review, 13(2). https://policyreview.info/articles/analysis/platform-power-ai-evolution-cloud-infrastructures

Luitse, D., & Denkena, W. (2021). The great Transformer: Examining the role of large language models in the political economy of AI. Big Data & Society, 8(2), 20539517211047734. https://doi.org/10.1177/20539517211047734

Manning, B. S., Zhu, K., & Horton, J. J. (2024). Automated Social Science: Language Models as Scientist and Subjects (No. 32381). https://doi.org/10.3386/w32381

Merton, R. K. (1942). Science and Technology in a Democratic Order. Journal of Legal and Political Sociology, 1, 115–126.

Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49–58. https://doi.org/10.1038/s41586-024-07146-0

Miceli, M., & Posada, J. (2022). The Data-Production Dispositif. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1–37. https://doi.org/10.1145/3555561

Mirowski, P. (2018). The future(s) of open science. Social Studies of Science, 48(2), 171–203. https://doi.org/10.1177/0306312718772086

Mühlhoff, R., & Ruschemeier, H. (2024). Updating Purpose Limitation for AI: A normative approach from law and philosophy (SSRN Scholarly Paper No. 4711621). Social Science Research Network. https://doi.org/10.2139/ssrn.4711621

Muldoon, J., Cant, C., Wu, B., & Graham, M. (2024). A typology of artificial intelligence data work. Big Data & Society, 11(1), 20539517241232632. https://doi.org/10.1177/20539517241232632

Nix, N., Zakrzewski, C., & De Vynck, G. (2024, March 15). Silicon Valley is pricing academics out of AI research. The Washington Post. https://archive.ph/cuRNU

Nguyen, E., Poli, M., Durrant, M. G., Thomas, A. W., Kang, B., Sullivan, J., Ng, M. Y., Lewis, A., Patel, A., Lou, A., Ermon, S., Baccus, S. A., Hernandez-Boussard, T., Ré, C., Hsu, P. D., & Hie, B. L. (2024). Sequence modeling and design from molecular to genome scale with Evo (bioRxiv 2024.02.27.582234). bioRxiv. https://doi.org/10.1101/2024.02.27.582234

Plantin, J.-C., Lagoze, C., & Edwards, P. N. (2018). Re-integrating scholarly infrastructure: The ambiguous role of data sharing platforms. Big Data & Society, 5(1), 2053951718756683. https://doi.org/10.1177/2053951718756683

Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review, 9(4). https://policyreview.info/concepts/digital-sovereignty

Rehak, R. (2024). On the (im)possibility of sustainable artificial intelligence. Internet Policy Review. https://policyreview.info/articles/news/impossibility-sustainable-artificial-intelligence/1804

Sastry, G., Heim, L., Belfield, H., Anderljung, M., Brundage, M., Hazell, J., O’Keefe, C., Hadfield, G. K., Ngo, R., Pilz, K., Gor, G., Bluemke, E., Shoker, S., Egan, J., Trager, R. F., Avin, S., Weller, A., Bengio, Y., & Coyle, D. (2024). Computing Power and the Governance of Artificial Intelligence (No. arXiv:2402.08797). arXiv. https://doi.org/10.48550/arXiv.2402.08797

Satariano, A., & Mozur, P. (2024, August 14). The Global Race to Control A.I. The New York Times. https://www.nytimes.com/2024/08/14/briefing/ai-china-us-technology.html

Scao, T. L., Fan, A., Akiki, C., Pavlick, E., Ilić, S., Hesslow, D., Castagné, R., Luccioni, A. S., Yvon, F., Gallé, M., Tow, J., Rush, A. M., Biderman, S., Webson, A., Ammanamanchi, P. S., Wang, T., Sagot, B., & Muennighoff, N. (2023). BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.

Schrepel, T., & Potts, J. (2024). Measuring the Openness of AI Foundation Models: Competition and Policy Implications.

Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2019). Green AI (No. arXiv:1907.10597). arXiv. https://doi.org/10.48550/arXiv.1907.10597

Shoaib, M. R., Wang, Z., Ahvanooey, M. T., & Zhao, J. (2023). Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models. 2023 International Conference on Computer and Applications (ICCA), 1–7. https://doi.org/10.1109/ICCA59364.2023.10401723

Spence, S. (2019, April 10). The birth of AI nationalism. New Statesman. https://www.newstatesman.com/science-tech/2019/04/the-birth-of-ai-nationalism-2

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1355

Sturgeon, T. J. (2021). Upgrading strategies for the digital economy. Global Strategy Journal, 11(1), 34–57. https://doi.org/10.1002/gsj.1364

The Public AI Network. (2024). Public AI White Paper. A New Approach To Public-Interest AI Investment.

Korinek, A., & Vipra, J. (2024). Concentrating Intelligence: Scaling and Market Structure in Artificial Intelligence. Institute for New Economic Thinking Working Paper Series. https://doi.org/10.36687/inetwp228

van der Vlist, F., Helmond, A., & Ferrari, F. (2024). Big AI: Cloud infrastructure dependence and the industrialisation of artificial intelligence. Big Data & Society, 11(1), 20539517241232630. https://doi.org/10.1177/20539517241232630

West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, Race And Power in AI.

Williams, A., Miceli, M., & Gebru, T. (2022). The Exploited Labor Behind Artificial Intelligence. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/
