ChatGPT in higher education in the Nordic countries
19 March 2023

Universities adjust to ChatGPT, but the ‘real AI’ lies ahead

Like most countries in the world, the Nordic nations of Denmark, Sweden, Norway, Finland and Iceland have been hit by the recent ChatGPT hype and their universities are grappling with how to deal with it – and how to benefit from the new technological leap it represents.

In Denmark, some universities have already moved to ban the use of the controversial chatbot in examinations.

“One has to do one’s exam alone and one is not alone if one is using artificial intelligence,” Anna Bak Maigaard, deputy director at Aarhus University, told the Danish broadcaster DR. The university announced the ban on 3 January.

The Technical University of Denmark has also officially banned the use of artificial intelligence in exams. Professor Philip John Binning, the university’s dean of graduate studies and international affairs, said the institution did not yet have a good recipe for handling the issue. He described ChatGPT as “contributing to the creation of a new reality” that required the university to think about how to use it actively and effectively.

At the IT University of Copenhagen, Head of Student Affairs and Programmes Lene Rehder said the institution had not come up with an official regulation banning the use of chatbots during exams, but it would not be permitted during “unaided” examinations.

“The changes we see now are similar to those we saw when the internet came, which was also revolutionary and changed a lot of things,” she said. “Use of the technology will depend on what kind of exam it is. When we have exams without any aids, it will not be allowed.”

Differing views

However, not all Danes agree.

In Akademikerbladet magazine, on 13 February 2023, Hans Stokholm Kjer, a professional project manager and freelance commentator, said it was a declaration of defeat to refuse students the use of ChatGPT, including during exams.

“The AI robot ChatGPT has demonstrated that it has many interesting implications for our way of thinking, and we are experiencing how close a machine can come to thinking like a human and communicating knowledge,” he wrote.

Kjer said ChatGPT was “wrongly characterised as intelligent”, but in fact was simply “extremely good at sorting out and reproducing information” which it had been fed. “And it will become better and better” at doing so, he said – which made it all the more important to fully embrace the technology.

“We do not want to see graduates who are educated to prepare for 1950. On the contrary, we want graduates who can guide us through the developments safely towards and beyond 2050,” Kjer wrote.

“So, if exams at universities are meant to give a picture of how good the students are, they can also demonstrate how good they are at choosing the right tools, among these, AI, where it is meaningful,” he argued.

However, where ChatGPT is used in university settings, it is important that there be “unambiguous rules and regulations governing its use”, according to Professor Hanne Leth Andersen, rector of Roskilde University and head of the education policy committee at Universities Denmark, an association of universities.

“For a start, the sudden leap in the development of AI manifested with ChatGPT applies pressure to universities to react quickly,” Andersen told University World News. “The chatbot will influence how we teach and how we conduct exams.

“At Universities Denmark, the position is very clear concerning exams: universities need to have unambiguous rules and regulations concerning how and when the use of these technologies is allowed and when not. This is very closely linked to the specific learning objectives in each course and programme.

“Concerning teaching, we need to be able to both understand and integrate new tools – something we are currently doing at our universities. Even though we are just starting the dialogue, there is no doubt that this will be a powerful and influential technology in the future.

“Students need to know how it works and how to use it – as well as when it is useful and when it is not. In education, the goal is not always finding the right answer, and in academia, it is central that you are able to track your methods and know your references,” she said.

Andersen said while the extended use of AI may seem to some extent “like science fiction”, adapting curricula and teaching methods to technological breakthroughs was not new.

“In the recent past, we have done so with computers, pocket calculators and the internet, just to mention a few. I am sure we will manage this as well, with due diligence and with respect for academic quality, upon which we build our education.”

Sweden: An early infringement

Although new, the technology is already being taken up by students and is presenting universities with practical challenges.

In the first case of its kind, a student at Uppsala University in Sweden has received a warning for using ChatGPT during an examination, according to a report in Universitetsläraren.

The disciplinary committee concluded that the student had “tried to mislead the examination board by using ChatGPT to complete the examination”.

The student was given a warning instead of being expelled since the examination tasks for which the chatbot was used only constituted a small part of the examination.

Professor Mikael Wiberg, of Chalmers University of Technology and Umeå University, confirmed to Universitetsläraren that he had received papers from students that referenced ChatGPT, but the universities’ policy on its use had not yet been formalised. “The application of AI tools is so new that we do not have any formulated policy on how to handle it,” he said.

In a more recent article in Universitetsläraren, published on 23 February under the headline “The use of ChatGPT is similar to ghostwriting”, several professors interviewed said they had to teach students how to relate to new AI tools, for instance in programming courses.

Martin Duneld, a lecturer at the department of computer and systems sciences at Stockholm University, told forskning.se that the new technology might make exams much more expensive.

“The technology has advanced extremely fast over a short time and I think that it will continue to do so. To ensure that the students have learned what we want them to learn, we have to break the trend with online exams. Instead, we have to return to supervised examinations on campus,” he said.

For some academics, like Anders Isaksson at Chalmers University of Technology, ChatGPT means the development of a new set of skills around AI prompts.

In December 2022 the associate professor of technology management and economics published a blog post, written largely by ChatGPT, that addressed some of the concerns about ChatGPT, including its use during exams.

“I used ChatGPT for all the text produced in this blog. My personal knowledge about artificial intelligence in general and of ChatGPT in particular is quite limited.

“The only sentences I wrote myself here are the two that are in this paragraph… (which may explain the poor English in this section),” he wrote.

Speaking to University World News, Isaksson said: “One thing I have learned is that with ChatGPT it is not only about writing; it is also [about] what you are prompting [that is, the commands and questions you feed it]. When using ChatGPT or other AI language models, crafting effective prompts is essential for producing high-quality output.

“While ChatGPT generates the text, it’s the writer who provides the prompts and structures the resulting output. Therefore, learning how to create effective prompts is a crucial skill for anyone using these services. As such, understanding how to prompt the AI language model will likely become a highly valuable skill for writers in the future.”
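Isaksson’s point about prompting lends itself to a small, purely illustrative sketch. The snippet below sends the same request to a chat-style language model twice, once phrased vaguely and once with explicit structure; it assumes the OpenAI Python client (v1 interface) and an OPENAI_API_KEY environment variable, and the model name is an assumption rather than anything named in the article.

```python
# A minimal sketch of prompt crafting, assuming the OpenAI Python client
# (v1 interface) and an OPENAI_API_KEY environment variable; the model name
# is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "Write something about AI and university exams."

structured_prompt = (
    "Write a 150-word paragraph for a university newsletter explaining one "
    "risk and one benefit of allowing ChatGPT in take-home exams. Use a "
    "neutral tone and end with an open question addressed to teaching staff."
)

# The same request, phrased vaguely and with explicit structure, so the
# difference in output can be compared side by side.
for label, prompt in [("vague", vague_prompt), ("structured", structured_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Comparing the two outputs makes the point concrete: the quality of what the model returns tends to track the specificity of the instructions it is given.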

Norway: A pro-active strategy

In Norway, seminars to discuss ChatGPT and its impact on research and higher education were held last month at the University of Bergen on 3 February and at the University of Agder on 8 February.

Professor Arild Raaheim of the University of Bergen told the first seminar that reactions to the launch of ChatGPT were characterised by “obdurate arguments” and “overly dramatic reactions”.

“Broadly we can see three different strategies or reaction patterns in meeting the new technological challenges,” Raaheim said.

The first was paralysis or a “wait-and-see strategy”. One example was the decision of the Norwegian University of Science and Technology to go ahead with the spring examinations as planned. Such a strategy, he said, was aimed at not rocking the boat in the hope that things would return to normal – which they seldom do.

The second was over-dramatisation with a focus on negative consequences followed by proposals of restrictions. “This often led to missed opportunities and a flowering of obdurate arguments for yesterday’s alternatives and more restrictions,” he said.

“These worries amount to putting your head in the sand,” he said in response to those who suggested strengthening the traditional exam. Raaheim compared fear of ChatGPT to earlier resistance to the use of PCs during exams.

A third reaction, said Raaheim, was a pro-active strategy, which contained some danger of “an over-optimistic belief in the future”, but where the new, in combination with the traditional, could “strengthen the student’s learning and critical reflections”.

In an interview with Khrono, Barbara Wasson, professor and head of the Centre for the Science of Learning and Technology at the University of Bergen, said she was also aware of researchers using the technology to edit and improve their own writing.

Asked how this could be done in a responsible manner, she echoed Isaksson when she said: “We have to be better at communicating with the chatbot. We have to have a good knowledge of our research to know what the truth is. And we have to develop our critical sense.”

In an interview with University World News, Morten Goodwin, a professor in the department of ICT at the University of Agder, said the most significant benefit of ChatGPT for universities was as a “directed writing tool”.

“In the short term, ChatGPT can help us all in the writing process … We can give it some keywords. ChatGPT helps with the first draft of, for example, the abstract.

“We can also get it to ensure the flow of the paper, identify parts of the text that need further clarification, find superficial logical flaws, get suggestions for additional arguments, and much more.

“It does not have the same scientific rigour as us scientists, so it should not be used as more than an intelligent but informed sparring partner.”

Goodwin said in the longer term, we could expect to see the development of tools that are tailored for scientists. These included “adequate writing and arguing at the scientific level, adding proper references, asking for experiments, and validating hypotheses based on the data provided. However, this still needs to be put in place.”
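As a rough sketch, under the same assumptions as above, of the drafting workflow Goodwin describes, the snippet below asks a language model for a first-draft abstract built from a handful of keywords and then asks it to flag sentences needing clarification; the helper name, prompts and keywords are hypothetical illustrations, not details from the article.

```python
# A hypothetical sketch of the two-step drafting workflow Goodwin describes:
# keywords in, first-draft abstract out, then a request to flag weak spots.
# Assumes the OpenAI Python client (v1 interface); helper name, prompts,
# keywords and model are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


keywords = ["ChatGPT", "higher education", "examination policy", "Nordic countries"]

# Step 1: ask for a first draft built from the keywords.
draft = ask(
    "Draft a 120-word abstract for a research paper using these keywords: "
    + ", ".join(keywords)
)

# Step 2: ask the model to point at sentences that need clarification,
# which the researcher then verifies and rewrites.
feedback = ask(
    "List any sentences in the following abstract that need further "
    "clarification or stronger evidence, and explain why:\n\n" + draft
)

print(draft)
print(feedback)
```

As Goodwin stresses, the result is a sparring partner’s draft to be checked and rewritten by the researcher, not a finished scientific text.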

Finland: A change and an opportunity

Finnish universities have taken a similar approach to ChatGPT as their regional neighbours.

New regulations concerning the use of large language models were approved by the academic affairs council at the University of Helsinki on 16 February. Kai Nordlund, professor and dean of science at the university, told University World News that it viewed large language models like ChatGPT as “a change and opportunity for university teaching”.

Like Goodwin, Nordlund recognised the likelihood of even more sophisticated language models in the future when he said: “It is foreseeable that more such models will emerge, and their functionalities will continue to evolve, so their existence should be taken into account in university teaching and research.”

Nordlund said the guidelines make it clear that teachers are encouraged to use AI in their teaching and to prepare students for a “society of the future where AI methods will be widely used”. However, he said use may be restricted where it does not promote student learning.

“If a language model has been used to help produce the work to be returned, the student must indicate in writing which model has been used and in what way.

“This also applies to theses. Large language models or other AIs must not be named as authors of the text or other written output, as the AI cannot take responsibility for the content of the text. The responsibility for the linguistic and factual correctness of all written material lies with humans.”

Professor Teemu Roos, based in the department of computer science at the University of Helsinki, said the new university guidelines were “sensible”.

“They basically explicitly state that the rules about copying and paraphrasing text from various sources also apply to ChatGPT. In a way, this is not news, since it’s always been required that sources be stated and copied text indicated clearly,” Roos told University World News.

The guidelines say the existence of large language models should be seen as an “opportunity”.

However, they state: “As AI brings new possibilities for producing text whose origin and reliability is unclear, they should be used in a controlled way. Use may be restricted in teaching in situations where the use would not promote student learning.

“At European Union level, an AI regulation is under preparation, which will also apply to AI systems in education. In addition, there is an ethical policy on AI and its use, as well as an ethical code for teachers. The university’s guidelines may be further specified in the light of future regulation and technological development.”

Iceland: A new framework

In Iceland, an open framework on ChatGPT for universities has been developed as a first step towards dealing with new AI technology. The framework was developed by a working group led by Dr Katrín R Frímannsdóttir, head of quality management at the University of Iceland, who met with representatives from all universities in Iceland.

The framework acknowledges that when used correctly, artificial intelligence can be a powerful tool for simplifying and expediting academic work, but its use must comply with all university requirements, both technically and ethically.

The framework states that the use of artificial intelligence in academic work is subject to the same rules as the use of any other sources or assistance, and that its abuse is subject to the same rules as any other form of academic misconduct. The origin must be stated, and citations and sources must be used in accordance with university guidelines and standards.

Jón Atli Benediktsson, rector of the University of Iceland and chair of the Icelandic Rectors’ Conference, said it was essential that universities work together, share perspectives, and reach a common understanding.

Preparation for ‘real AI’

Speaking on the limitations of AI at its current stage of development, Reykjavik University’s Dr Kristinn R Thórisson, who has been researching AI in academia and industry for over 30 years, told University World News that “contemporary deep learning technologies are based on statistical methods and, as a result, they do not really understand the topics they produce text about, nor do they have any general idea about how the world works”.

As a result, any student who uses ChatGPT or similar technology based on deep neural networks (DNNs) to do their homework or to answer exam questions is doing “double damage”.

“They are wasting teachers’ time by presenting mindlessly generated work as their own, and they are foregoing the education and training that exams are intended to ensure.

“Additionally, they are risking their own reputation because DNN-based chatbots are notorious for producing incorrect information; in fact, it is not possible to be sure that the text thus generated is correct, other than to verify it, which of course defeats the purpose of their use in the first place.”

Thórisson, who is research professor of computer science at Reykjavik University and co-founder of the university’s AI lab CADIA, which he co-directed from 2005 to 2010, and managing director of the Icelandic Institute for Intelligent Machines, said: “To this extent, contemporary DNN methods for cheating are severely limited and pose less of a threat than may seem at first.

“Where these technologies may be useful, however, is for preparing us for a future where machines have real intelligence – ‘real AI’ – and could thus pose real threats to numerous aspects of our social structure, including education.

“Part of the benefit of automation that this might enable will undoubtedly have to go towards ensuring, or making up for, other aspects of society that will need mending or modification of some sort.”
