Research explores how AI might change knowledge production
In two major studies in Denmark, experts in computer science and in the sociology of science are exploring how researchers are using generative AI – and how rapidly evolving AI tools might change ways in which scientific knowledge is produced and diffused.
The research will map the growing influence of AI on science and how AI technology is spreading across scientific communities, and will analyse AI’s current and future impacts. It will combine computational social science methods with controlled experiments.
“Development is moving so fast at the moment. But right now, we have a window of opportunity where we can compare the knowledge produced by humans with the work of AI,” said Professor Roberta Sinatra of the University of Copenhagen in a university article.
“Ultimately, the aim of the project is to give us all a much clearer picture of the new and unexpected consequences of AI-infused science.”
The research
One of the two studies is “Qualifying the Prevalence and Diffusion of Generative AI in Science”, which will run during 2024 and 2025. It is funded by the Villum Fonden with a grant of nearly DKK3 million (US$437,000) from the Villum Synergy programme.
The co-leaders are Roberta Sinatra, who is with the University of Copenhagen’s Center for Social Data Science as well as IT University of Copenhagen, and Associate Professor Mathias Wullum Nielsen of the department of sociology at the University of Copenhagen.
Sinatra is also leading another, bigger project funded by a Consolidator Grant of US$2 million from the European Research Council, for a study titled “Qualifying AI-infused Science” (scAIence). It will run for five years from 2024 and will include three PhD students, three postdocs and one student assistant in addition to project leader Sinatra.
scAIence will complement the Villum Synergy project, according to the University of Copenhagen. The Villum research measures how AI is spreading among scientists, while the scAIence research investigates how AI is transforming and shaping scientific knowledge.
“We will investigate how papers and other scientific publications produced by AI differ from human scientific writing. What are the patterns and biases between AI and human output? And to what extent does the use of AI in science create new challenges or, in some cases, improvements?” Sinatra is quoted by the university as saying.
University of Copenhagen articles published last October and November pointed out that generative AI tools like ChatGPT have quickly been adopted by researchers, to the point where some papers are co-authored by AI.
Sinatra said that while generative AI is already used by academics for a range of purposes, not much is known about how it is being used, for what reasons or to what extent. “Our goal is to unveil these aspects, ensuring the scientific community is prepared for the broader implications of AI’s pervasive integration.”
The scAIence research
The aims of scAIence are to quantify whether, how and with what effects generative AI is changing the way scientists perceive, write, communicate and disseminate science – and to explore opportunities, threats and consequences of scientists augmenting science with AI.
Sinatra and Nielsen’s key objectives are to provide evidence on overt and covert use of large language models (LLMs); analyse trends in the usage and prevalence of LLMs across disciplines; and predict the diffusion and adoption of generative AI in scientific networks.
According to the University of Copenhagen, the two-year Villum Synergy project will include a questionnaire targeting 200,000 publication-active scientists to investigate the use of large language models. The survey will cover both overt and covert, potentially problematic uses of AI.
The researchers will examine large collections of scientific texts to quantify trends in overt and covert use of AI-generated content. This will involve techniques that can identify text covertly written by AI models such as ChatGPT.
Sinatra said: “We’ll employ a blend of surveys, data analysis and network modeling to estimate the undisclosed adoption of AI. This approach will let us determine whether AI’s involvement is associated with more impact and consequently advances the progress of science.”
Mathias Wullum Nielsen added: “Large language models will likely increase the pace of scientific discoveries in many fields, but we need to ensure that scientists use these models in trustworthy and transparent ways. If they don’t, scientific knowledge may lose its legitimacy in broader society in the long run.”
Professor Kjetil Rommetveit, of the Center for the Study of the Sciences and the Humanities at the University of Bergen in Norway, congratulated his colleagues in Denmark on their new research and told University World News that he looks forward to seeing the outcomes.
“Although AI has been with us for many years, as promised and sometimes also working technology, the ways in which LLMs merge with and change scientific practices are novel and still-emergent phenomena. I hope that the project will succeed in highlighting the overt and covert uses of the models so as to help scholarly activities in productive ways.
“As a scholar of science, technology and society, I also hope that this will include explicit and implicit social and political assumptions and choices built into these models. Issues of bias, representativity, inclusion and exclusion will unfold not merely in relation to the models, but relative to infrastructural, economic and political power,” Rommetveit said.
“Important here are strong tendencies to scale up in order to incorporate the greatest possible amounts of content and data: how will this shape scientific activities, and the social roles and uses of the sciences?”
How well does AI do science?
The research will use a variety of methodological approaches.
A cornerstone will be to train AI to write thousands of literature reviews and scientific paper abstracts by feeding AI models with topics and keywords found in existing academic papers written by humans and published in 2021 and 2022.
These papers are therefore neither ‘polluted’ by AI-generated text nor yet fed into the AI models. The human-written papers will then be compared with their AI-generated counterparts, the University of Copenhagen explained in the articles.
This approach will allow researchers to assess how AI models process a wide range of current scientific topics and questions compared to humans. It will also provide insight into how AI deals with the social aspects of scientific writing.
Sinatra said social factors play an important role when researchers decide what kind of existing knowledge to acknowledge and include in their scientific production.
“One example is recency. Pilot studies suggest that humans give more weight to recent studies than AI models, which tend to ‘flatten’ the importance of old and new knowledge, which is problematic.
“On the other hand, AI can broaden the scientific perspective in other ways. The project will provide a more accurate picture of such differences, which may include disparities or biases related to gender, language and other factors,” she said.
Using advanced computational methods, the study will also attempt to measure systemic biases by developing new metrics in this area.
In addition, controlled experiments will investigate human perception of artificial intelligence. According to the University of Copenhagen material, one idea is to test how well conference abstracts written by AI perform in peer reviews. Another is to present experts with two similar abstracts – one written by AI, one by a human – and see if they can tell them apart.
Positive responses
Professor Morten Goodwin, of the department of ICT at the University of Agder in Norway and deputy director of its Centre for Artificial Intelligence Research, told University World News: “Projects like scAIence are vital for academic AI research as they provide empirical data on AI’s current and potential uses in science, helping to shape guidelines in academia and understand its transformative impact. It also prepares the academic community for AI’s evolving role, ensuring its effective and responsible integration into future research.”
Pekka Abrahamsson, a leading professor of software engineering at Tampere University in Finland and founder of its pioneering GPT Lab, told University World News that Sinatra’s work was timely and necessary.
“It aligns with generative AI’s role in enhancing scientific exploration, discovery and problem-solving. Large language model technology reduces repetitive tasks and manual labour, leading to more reliable scientific results and better communication of discoveries. Scientists must learn to use these new technologies,” Abrahamsson said.
Professor Sylvia Schwaag Serger of Lund University in Sweden – a member of the newly established Swedish government AI commission and a former director of Vinnova, the Swedish government innovation agency – told University World News: “This work is well positioned to enhance our understanding of how generative AI will impact science, and the multidisciplinary approach combining computer science expertise with sociology of science is promising and essential for intelligently approaching this question.”
“In addition to its effects on research, AI is shaking up the research funding system,” Schwaag Serger said, drawing attention to an October 2023 Nature article by Juan Manuel Parrilla arguing that AI’s ability to do much of the work of complex and time-consuming grant applications shows that the current system is broken.
“Even more importantly, AI is set to fundamentally disrupt education and teaching. It will require new approaches to learning and teaching, but also to curricula, degrees and skills formation. This will be particularly important, and challenging, in and for social sciences and humanities.
“In addition to asking ourselves how AI will change the way we teach, we need to ask ourselves what skills we need to equip our students with to be able to understand, utilise and govern AI to the benefit of society,” Schwaag Serger said.