
Beck, Poetz & Sauermann on using AI tools when developing novel research ideas as input for writing grant proposals during an experiment at the OIS Research Conference 2022.

Abstract

Susanne Beck

Artificial intelligence (AI) is increasingly contributing to scientific breakthroughs in many fields. It is also clear that openness and cross-disciplinary collaboration are becoming key features of the process of modern science. Yet, we know little about the intersection of these two developments – whether and how AI may shape openness and collaboration in research. We, a group of scholars in the fields of science and innovation studies, engaged in an experiment as part of the 2022 OIS Research Conference: we collaboratively developed research ideas for a grant proposal, first without and then with the help of an AI tool. The results from the experiment indicate that AI is potentially useful for identifying and sharing relevant knowledge and for collaborating with knowledge actors outside the boundaries of scientists’ own scientific fields, but it is not (yet) able to fully leverage this potential. At the same time, using AI tools may change the nature of scientific collaborations in subtle ways and lead to behavioral and cognitive challenges for scientists. The experiment raises new research questions regarding how collaborations between human actors and AI can be organized in a productive way.

“A critical area of future science of science research concerns the integration of AI so that machines and minds can work together.”

Wang & Barabási (2021, p. 237)

Artificial intelligence (AI) in science

Humans have been thinking about the use of artificial intelligence (AI) since ancient times (Mayor, 2018). What started with a (mythical) Greek automated giant protecting the island of Crete has now become a ubiquitous topic that is reshaping a large fraction of our daily practices, interactions, and environments (Taddeo & Floridi, 2018). The story of AI is not linear, however, and periods of great enthusiasm have been followed by disappointment about slow improvements in AI capabilities. Even the Nobel Prize-winning Herbert Simon got it seriously wrong when he claimed in the 1960s that, within 20 years, machines would be able to do any work that humans could do (Simon, 1965).

In general, AI refers to intelligence demonstrated by machines, typically based on ‘machine learning’ or, more recently, ‘deep neural networks’. Even though AI has not yet completely relieved humans of the pains (and pleasures) of work, there is now clear evidence that it can improve efficiency in many areas, including production and operations, human resource management, marketing, and medical diagnosis.

But how will AI affect scientific research? Scholars of science have considered two quite different paths through which AI may change how science is done (Wang & Barabási, 2021). First, AI can increase the efficiency of the scientific research process in areas ranging from information retrieval (e.g., Google Scholar) to the automation of labor-intensive tasks (e.g., protein folding). Contemporary examples include the use of AI to dramatically increase the speed of drug discovery (Chen, Engkvist, Wang, Olivecrona, & Blaschke, 2018) or to analyze massive amounts of particle accelerator data (Ciodaro, Deva, De Seixas, & Damazio, 2012). The use of AI in such tasks is welcomed by scientists, as it allows them to save time and focus on more interesting and creative tasks. Second, AI can increasingly produce high-speed, creative solutions to complex problems. While these advancements may appear threatening (to scientists’ identities, at least), they may be “moving science forward with a speed and accuracy unimaginable to us today” (Wang & Barabási, 2021, p. 235). Scholars of science have a lot of work to do in order to better understand the evolving role of AI in science, and especially the interplay between human scientists and AI in various aspects of the research process (Raisch & Krakowski, 2021). This includes the use of AI in scientific collaborations and as a potential means for fostering openness in science. With this in mind, we now turn to a more specific question:

How will AI influence openness and collaboration in science?

Marion Poetz

One of the major opportunities – and challenges – in science is to identify and share relevant knowledge and to collaborate with knowledge actors outside the boundaries of scientists’ own organizations, communities, or scientific fields. In the context of scientific research, inbound, outbound, and coupled boundary-spanning knowledge flows have recently been summarized as “open and collaborative research practices” (Beck, Bergenholtz, et al., 2022). Such practices (e.g., open access publications, data sharing and re-use, or inter- and transdisciplinary collaborations) are increasingly seen by scholars and funders as a promising approach to reducing incrementalism in science, increasing scientific productivity, and producing more impactful research (Jones, 2009; Pammolli, Magazzini, & Riccaboni, 2011). But open and collaborative practices face several difficulties, especially when knowledge needs to cross disciplinary or epistemic boundaries (Beck et al., 2021). For example, finding distant collaboration partners and organizing the collaboration productively requires additional resources (e.g., coordination costs between actors, or preparing data, code, or results to be shared or reused). Interdisciplinary research is also less likely to receive funding and has a higher risk of failure (Bromham, Dinnage, & Hua, 2016; Franzoni, Stephan & Veugelers, 2022; Rhoten & Parker, 2004). How, if at all, can AI help to overcome the challenges involved in using open and collaborative practices in science? More specifically, can it facilitate practices such as the open sharing and re-use of data and results, scientific collaboration across fields and institutions, or collaboration between scientists and crowds? What might go wrong? And what are the trade-offs and boundary conditions of using AI if the goal is to increase not only the productivity but also the societal impact of science?

To explore these questions, we, a group of scholars in the fields of science and innovation studies, engaged in an experiment: We aimed to collaboratively develop novel research ideas as inputs to a grant proposal in our field, first without the help of AI and then by using an AI tool to support us. We chose this particular task because experts continue to question AI’s capability to not only solve existing problems but also to identify new problems (Wang & Barabási, 2021). Moreover, open and collaborative practices are particularly valuable in early stages of the research process, such that finding out whether and how AI can help at that point seems particularly important (Beck, Brasseur, Poetz, & Sauermann, 2022; Franzoni, Poetz, & Sauermann, 2022).

Henry Sauermann

We ran the experiment in a special session during the annual Open Innovation in Science (OIS) Research Conference. This conference provides scholars interested in the role and value of openness and collaboration in science with an opportunity to meet and discuss their research. A unique feature of this conference is that it also allows participants to experiment with incorporating open and collaborative practices into their own research. In 2022, the conference took place in a hybrid format at CERN IdeaSquare.

The different stages of the experiment were as follows: First, all participants were randomly assigned to teams of 5 or 6 researchers and given the task of collaboratively developing an idea for a new research proposal in a more ‘conventional’ way (without the use of an AI tool). Second, all teams were granted exclusive access to a beta version of a new AI tool (Iris.ai Researcher Workspace) and asked to either develop a new research idea or refine the idea previously developed.

Iris.ai Researcher Workspace is an AI-based platform for document processing that aims to make the knowledge overflow in science more manageable (Iris.ai, 2022a). It has access to different databases covering open access articles, research funding applications, and US patents, and it also allows scholars to upload their own list of documents. The core functions of this natural language processing tool include 1) an explorative literature search, where the tool takes the content of the input text or document and matches it to all documents in the selected database(s); 2) a variety of filters and tools for smart document analysis that allow scholars to navigate a large document set, for instance by providing a description of the context one is looking for; and 3) a summarization feature that writes summaries of specific articles or sets of articles (Iris.ai, 2022b).

Third, all teams were asked to internally reflect upon their experience, particularly by comparing differences in the ideation process and outcome with and without the use of the AI tool. Finally, all teams presented their reflections in a short pitch to the other participants and, by doing so, collaboratively produced insights into how AI influences openness and collaboration in science.
Each pitch was structured along three guiding questions: 1) Did the AI help you in developing your novel research idea and, if so, how? 2) Did the AI compromise your ability to develop a novel research idea and, if so, how? And 3) how did the AI tool influence openness and collaboration in scientific research, in your experience?
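The explorative search described above, which matches the content of an input text against all documents in a database, can be illustrated with a minimal sketch. Note that this is a generic content-based retrieval example using TF-IDF similarity, not the actual (proprietary) Iris.ai implementation; the corpus and query are invented for illustration.

```python
# Minimal sketch of content-based explorative search: rank documents in a
# corpus by their textual similarity to a free-text problem description.
# Illustrative only; Iris.ai's actual matching method is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Open access publishing and data sharing in scientific research",
    "Deep neural networks for protein structure prediction",
    "Crowd science and citizen participation in research projects",
    "Particle accelerator data analysis with machine learning",
]

def explorative_search(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query description."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the documents plus the query so both share one vocabulary.
    matrix = vectorizer.fit_transform(documents + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

matches = explorative_search(
    "citizen participation and crowd science in research", corpus
)
print(matches[0])  # the crowd-science document ranks first
```

In a real tool the corpus would be millions of open access papers and the ranking model far richer, but the principle is the same: the quality of the input description shapes the output, which is consistent with participants reporting that the tool forced them to articulate their problem carefully.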

The following section synthesizes what the conference participants presented as their main reflections in the final pitching session. Impressions from the process and key insights of the experiment are further summarized in this video.

Note: AI-generated intro and outro graphics created using Disco Diffusion.

Insights on how AI influences openness and collaboration when developing novel research ideas

1. Finding distant knowledge and novel research ideas

Participants in the experiment felt that the AI could help them overcome disciplinary and epistemic boundaries when identifying a new research topic. One mechanism was that the AI’s search input process forced them to structure their search by describing the problem in detail. In doing so, the AI made them aware of implicit assumptions codified in the terminology and concepts they used in the input text that initialized the search. As one participant put it, “[the AI] increased [our] awareness of the implications of the words that we were choosing, so analytical awareness about how you frame the question. It encourages you also to experiment with (…) different ways of formulating and using different analytical wording.” However, participants received the literature suggested by the AI (i.e., the output of the explorative search) with a certain level of distrust, given that the selection process remained a black box. Although the developers of the AI envisioned the results of the explorative search more as inspiration for pursuing new opportunities than as a bullet-proof literature review, the conference participants appeared to expect the latter. Thus, the mere presentation of articles, sometimes from far outside the participants’ primary field of research, was not considered sufficient to develop a boundary-crossing, novel research idea: scholars could not evaluate the quality of the output or connect it with their own knowledge, which ultimately prevented further knowledge recombination. This was mitigated, though, when participants already had a rough idea of what they were looking for. In that case, they were better able to connect the suggested (distant) literature with their own knowledge, which allowed them to evaluate its relevance and made the filtering and sensemaking process less resource-intensive.
Hence, it was the recombination of the AI’s output with participants’ own knowledge that produced more novel research ideas, not the AI’s output alone. Without a clear idea and without the ability to properly evaluate the literature, some participants felt that the amount of literature produced by the AI was too much to pursue, creating excess noise: “as we were working individually on it we found it was taking us down a rabbit hole, because we were zooming in more and more into a silo, and actually what we were hoping was to use it as a boundary-spanning tool.” Not being able to properly evaluate suggested articles from distant fields also posed other challenges: participants found it difficult to judge the scientific quality and merit of suggested articles and, as a consequence, were not sure whether they should take the results seriously or how to interpret them. Relatedly, some expressed a concern that the AI’s recommendations might be biased by a dominant philosophy or by assumptions made in a distant, unfamiliar field. With scholars unable to detect such biases in the AI’s output, the resulting research ideas might suffer from the same unobserved tendencies.

2. Effects on collaboration activities 

Participants reported that the AI made them aware of potentially interesting and relevant future collaborators they had not thought of before. More specifically, by encountering unfamiliar literature streams, the teams were exposed to potential collaborators in these more distant fields, as well as to collaborators from outside of academia (e.g., groups involved in or affected by the research, such as patients, engineers, or lawyers). At the same time, however, identifying novel research ideas with the help of the AI also took away opportunities for discussing ideas with current collaborators, and even with team members sitting at the same table, because participants used the AI individually rather than as a team. Hence, the way in which scholars developed a new idea with the AI differed from how they developed one with colleagues: while ideas were continuously refined in a cumulative way when discussed with colleagues (i.e., building upon each other’s arguments), idea development between the scientist and the AI happened in an iterative, non-cumulative way (i.e., each search round started ‘from scratch’ without building on the prior search). Building on this insight, participants emphasized the need to understand the consequences of these differences, and how they may depend on other factors. For instance, one participant mentioned that the extent to which the AI fosters or hinders collaborative approaches to research may depend on the cognitive thinking styles of individual researchers. Another boundary condition may be the stage of the research process (i.e., identifying a novel idea vs. concretizing the research question against the background of existing research). For example, participants more often expressed positive opinions about using the AI during the identification of a research topic, while its usefulness for reviewing the literature in a conventional sense was more often considered limited.
Accordingly, two teams applied a sequential approach to overcome the perceived novelty disadvantage associated with non-cumulative search: they first searched individually for ideas using the AI and subsequently discussed the resulting ideas as a team, thereby re-introducing the cumulative element. Thus, it seems that involving an AI requires not only a better understanding of the AI itself, but also behavioral and cognitive changes on the part of the scientists, including thoughtful approaches to integrating artificial and human intelligence.

3. Effects on other open practices

Reflections on how the AI influenced open practices other than those mentioned above were manifold. As for their own way of working, participants “found some kind of paradox in that [being confronted with] new concepts open up our minds, but they could also (..) hinder some communication in the group”. Hence, different members of a team may handle the output of the AI (i.e., the distant knowledge pieces) differently, connecting it to unrelated knowledge in their respective minds. As a consequence, this can further complicate communication within teams and hamper knowledge sharing. In a similar vein, while the use of the AI can help to identify new collaborators, it may also reduce the motivation and perceived necessity to reach out to potential collaborators in the first place, as the AI is assumed to provide the distant knowledge that scholars are seeking. As one participant mentioned, “maybe it reduces [openness] because I used to have to go to an expert in that other field and ask him or her what’s going on in that field, what do you know, what might be relevant? Now I’m just sitting at a computer and, kind-of talk to a computer, and not collaborate anymore with other people.” At the same time, the motivation to open up research projects in such a way may also suffer if the initial amount of information provided by the AI turns out to be simply overwhelming, requiring substantial resources to filter and analyze the output. Lastly, participants also pointed towards a “pull” effect of the AI on openness in scientific research: if such AI tools become the “dominant platform for search”, and given that many AI tools can only access open access publications, more authors and journals may feel the need to publish open access because “papers that are not open access will not get cited and explored.”

Conclusions

Returning to the two paths through which AI may influence science, participants recognized the potential of AI both to make research more efficient and to identify novel research problems. At the same time, the experiment made clear that many participants had unrealistic expectations regarding the outputs produced by AI, while underestimating the effort required to process and make use of those outputs. While one reason might be the limited time scholars had to interact with the AI, the experiment also highlighted the need to further understand how interactions between human actors and an AI can be organized in a productive way.

With respect to the influence of AI on openness and collaboration in science, the results from our experiment indicate that:

  • AI is potentially useful for identifying new knowledge and potential new collaborators across boundaries, but it is not (yet) able to fully leverage this potential. AI makes it easier to search for and identify relevant knowledge (and potential collaborators) across boundaries. However, AI-generated output is not “ready to use” – it requires considerable human effort to evaluate its suitability and to make sense of it. This presents a rich opportunity for future research to develop better AI (i.e., AI that can better predict which outputs are helpful for researchers) as well as to develop mechanisms that facilitate the interaction between human actors and AI. Another factor constraining the potential of AI as a boundary-spanning tool is that many current AI solutions are still limited to open access publications (Fortunato et al., 2018). The rise of AI tools in scientific research, however, may eventually boost open access publishing and thus increase the scope of the relevant search space, as well as the accessibility of scientific knowledge in general.
  • Using AI tools to successfully develop novel ideas may require changes in team dynamics and collaboration patterns and may lead to behavioral and cognitive challenges for scientists. While AI facilitates some aspects of collaboration, it also creates new coordination costs and may undermine collaboration among existing partners by replacing distant collaborators’ expertise. Thus, more research is needed in order to better understand antecedents and contingencies of collaborations that involve human actors and AI (e.g., thinking patterns, procedural optimizations, or roles) and to identify potentially more effective approaches to interacting with the AI. Future research also needs to address pressing questions related to responsibilities and ownership of AI contributions: If an AI makes a substantial contribution, for example to the development of a new research proposal, how are the positive downstream consequences attributed and value captured, and who should be held responsible for any negative consequences?
  • Finally, participants’ reflections on the actual process of using the AI suggest two areas for potential improvement. First, teams that more quickly grasped certain features of the tool were more satisfied with the output they received from the AI, either because the results more closely matched their needs and expectations or because they better understood the mechanisms behind the search. Hence, the easier it is for users to understand the functionality of AI tools, the more likely they are to build upon and integrate the distant knowledge opportunities that the AI brings to the table. Second, in line with the idea of cumulative idea development, participants expressed the desire to interact more intensively with the AI. For example, they requested features that would enable them to further manipulate the AI’s output in order to explore boundary-crossing relationships between topics, themes, and fields. Enabling the integration of multiple search processes would allow the human-AI pair to connect the new (distant) knowledge with scientists’ existing knowledge, making it easier to span boundaries. Building on the same argument, the current setup of the AI, designed for individual users, could be expanded to enable direct use by multiple team members.

We hope that these insights from our 2022 OIS Research Conference Experiment help shape future research by pointing towards untapped opportunities, potential risks, and boundary conditions for using AI in science, especially related to openness and collaboration. The next large jump in AI capabilities might be just around the corner – scholars should be ready to study and help guide the associated changes in the scientific research enterprise.