Generative AI action hints at core future roles in universities
With the arrival of generative AI such as ChatGPT, science fiction took a big step towards fact. Last year, universities explored the implications of AI. This year kicked off with innovative Arizona State University in America partnering with ChatGPT’s creator to advance learning, research and services – hinting at core roles for AI in universities in future.
Arizona State University (ASU) announced on 17 January 2024 that it was the first university to forge a partnership with OpenAI, the company that developed ChatGPT. The ChatGPT Enterprise platform will be integrated into the institution with a focus on “enhancing student success, forging new avenues for innovative research and streamlining organisational practices”, it said.
The university will develop personalised AI tutors and study-help avatars for students, who overwhelmingly embrace the technology, ASU Chief Information Officer Lev Gonick told University World News.
Regarding welfare, many students have said they prefer the anonymity of dealing with a bot to queuing up in support offices – especially when their problems have to do with health or well-being. Humans step in to help when needed.
Last year saw the explosive debut of ChatGPT, which OpenAI said had been adopted by teams in 80% of Fortune 500 companies within nine months. It has been vigorously exercising minds at universities across the world.
This year, higher education will further explore the technology’s potential, and the focus is likely to be on implementation: leveraging the power of generative AI to improve life and learning for students, and to boost research.
“The challenge will be to address the 800-pound elephant in the room,” said Gonick – the quality of information, including ‘hallucinations’, when ChatGPT generates incorrect information as fact, as well as concerns around privacy and data protection.
The university is looking forward to the higher security and privacy of ChatGPT Enterprise and will raise the quality of information used; already, ASU and other institutions have developed multiple in-house AI language models.
Science fiction to science fact
This is all a long way from the initial responses of some universities around the world to the advent of ChatGPT in November 2022, which was to banish it from campus. Today, the importance of AI is recognised by universities everywhere, though not all are engaging with it.
University World News is moderating a panel at the 2024 ABET Symposium, which is titled “Science Fiction to Science Fact: The impact of AI on higher education” and will be held in Tampa, Florida, in the United States on 4-5 April. ABET is a global non-profit quality assurance agency in the science, technology, engineering and mathematics fields, and one of the partners of University World News.
To inform this and other discussions, it is useful to look at current aspects of AI in higher education – such as personalised tutoring and ethical issues around AI in learning – drawing on experts from the European University Association, City University of New York and ASU.
Arizona State University, based in Phoenix, has four campuses and more than 73,000 undergraduate and graduate students from across the US and some 120 countries. Importantly, for nine years in a row it has been ranked America’s ‘most innovative’ university by US News & World Report.
AI is consistently identified as a high priority among the more than 800 universities and rectors’ conferences in 48 countries represented by the European University Association (EUA), said Dr Thomas Jorgensen, its director for policy coordination and foresight.
“It comes up because everybody knows that something is happening, but nobody knows exactly what is happening. This is not an area where you sit in Brussels and the direction of travel is absolutely clear,” said Jorgensen.
The EUA is setting up an AI working group, which will begin in March. “We need to facilitate a discussion about what the real issues are. What do we know, what do we need to know? It’s been a year of experimentation, really. We can begin to share the outcomes of that experimentation, maybe share a bit about the methods of that experimentation,” said Jorgensen.
The EUA has considerable convening power in European higher education. It contributes to policy-making in the European Union and is key in raising issues that are affecting universities. The association identified new digital technologies as a potent development back in 2018, and again in its 2021 Pathways to the Future report, which focuses on labour markets.
Some imperatives for AI in higher education
Matthew K Gold is an associate professor of English and digital humanities at the Graduate Center at City University of New York. He writes about digital pedagogy, new ways to teach online with technology, and the impacts of technology on the academy.
Gold pointed out that groundwork for generative AI has been laid in higher education over the past decade: “There’s been an increasing openness, especially among faculty who don’t have technical skills, to begin thinking through how to productively incorporate technology into their work. Certainly at an earlier stage, there was a lot of fear and distrust of technology.”
Within the education technology space, there is a schism between for-profit corporations that create proprietary platforms – “which often speak to education at scale”, for instance learning management systems – and academics working in open education, which focuses on student learning and student expression, encouraging students to produce and publish.
“The COVID pandemic threw everything upside down, because suddenly everyone, no matter their comfort with technology, had to at least teach via Zoom or use email or learning management systems,” Gold told University World News. “Many corporations within the ed tech space saw an opportunity to grow their operations, a lot of them very profitably.”
“Now we’re moving from the pandemic into a hybrid-ish environment where many classes are still in person, but many are also now online, or a mix of in person and online. We’re coming back to where we were before the pandemic, but with a lot more people who have experience teaching with technology,” Gold explained.
Last year, generative AI brought up issues around authenticity and plagiarism, originality and cheating. There are other things to be concerned about, said Gold – for instance, the incorporation of AI-based tools into the proprietary platforms that provide a range of services to universities, and rights and privacy issues around the handling of student papers. “As with anything, there are things to be wary of and there are benefits.”
Indeed, there have been related and rich debates around AI and student assessment. For Jorgensen, an optimal approach for universities is to have multiple strategies – from policing student uses of AI, to using other AI to check for unacceptable uses of AI, to changing examination methods. Some have argued that AI could encourage universities towards more authentic forms of assessment and more focus on learning outcomes.
Of course, generative AI is not new to some disciplines, Jorgensen told University World News. It has been used in law for a long time, in a very practical way. “But when you begin to play around with AI, you need to learn how to communicate with it. It doesn’t do what you want straight away.” Many universities have responded with, for example, training courses in ‘prompt engineering’ – structuring text so that it can be understood by generative AI.
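For readers unfamiliar with the mechanics, the sketch below shows what such structuring can look like in practice. It is a minimal, hypothetical Python example using OpenAI’s chat API; the model name, system prompt and tutoring scenario are illustrative assumptions, not details of any university’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A structured system prompt steers the tone, scope and safety of the tutor.
SYSTEM_PROMPT = (
    "You are a patient introductory-statistics tutor. "
    "Explain concepts step by step, ask one checking question at a time, "
    "and never simply give away the final answer to graded work."
)

def ask_tutor(student_question: str) -> str:
    """Send a structured prompt to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_question},
        ],
        temperature=0.3,  # a low temperature keeps answers focused
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_tutor("Why do we divide by n - 1 in the sample variance?"))
```

The ‘engineering’ sits almost entirely in the system prompt: the same underlying model behaves very differently as a step-by-step tutor than as an answer machine.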
Universities also have to contend with AI developments outside the higher education sector but related to it, such as the labour market and the future of work. AI technologies driven by big data are triggering what UNESCO and others label the Fourth Industrial Revolution.
Universities have long needed to train students for jobs that do not yet exist, but an AI-driven transformation of the world of work looks set to happen quickly and extensively.
There are interesting discussions to be had, including about reskilling people for changing jobs, said Jorgensen, who does not envisage mass graduate unemployment because of technology. “It might be that the workplace is going to change and we’re going to do more. AI is going to be an efficiency tool,” he said.
A focus on students
For education systems, staff and students, the COVID pandemic highlighted important questions, such as the impacts on students of solitary digital learning versus face-to-face learning – for instance, mental health issues among students increased, in some cases dramatically.
At Arizona State University, the focus is on students, and an important part of its new AI work is in educational techniques – supporting students with a personalised AI tutor and student avatars that provide learning assistance, among other things.
Under the new partnership, faculty and staff have been invited to submit proposals for innovative uses of ChatGPT Enterprise. This process opens on 1 February 2024, but already emails have poured in offering ideas, said Gonick. The goal is to “leverage the university’s knowledge core to develop AI-driven projects that look likely to revolutionise educational techniques, research and administrative efficiency”, he said earlier this month.
Gold is sceptical of the potential of AI to support student learning and to help identify students who are at risk and need supportive intervention. “We need to know more about how the students themselves perceive AI systems that they interact with and the effects that has on both their wellbeing and their learning.
“For instance, how does the learning in an online class with lots of AI assistance compare to an in-person experience? I worry that universities will turn to AI-based advising systems in place of properly staffed and funded academic advice offices,” Gold told University World News.
“I believe strongly in the value of synchronous education. Educational experiences that involve, say, a class meeting in person or over Zoom, but synchronously exchanging with each other. What worries me are models of online education that are largely asynchronous, that promise scale to universities. It’s also a little harder to know how well students are doing in asynchronous courses, both in the sense of learning and general wellbeing, even though such courses create more data for universities to analyse.”
Gold stressed that the usefulness of technology has to do with how technology is implemented, and it must foreground student wellbeing. “For instance, in my classes, I have students do a lot of online writing and blogging and publishing.
“A positive aspect of that is it displaces the faculty member as the sole source of authority in the classroom and enables students to write for more public audiences. And it enables them to think of themselves differently, not just as learners, but also as people who have important knowledge, thoughts and experiences that they can share with the world,” Gold said.
Far from making life easier, technology often has to be labour-intensive for both the student and the academic to work well in the classroom, Gold continued. “Both have to be invested in the use and evaluation of technology, and should approach technology from a critical perspective, asking not just about using the technology but questioning what it is, how it works and what happens to student data that is created through it.
“For instance, what possibilities exist for students to opt out of such systems and protect their data? What transparency is there for students to know how their data is being handled by universities and third-party companies they partner with?”
Ethics and regulation
The ethics of AI in higher education was another major area of discussion last year. ASU did not start engaging with OpenAI only because the company holds 65% of the generative AI market among its peers, but because of shared values, Gonick told University World News.
Both partners value inclusion of all in the benefits of technology. “Many of OpenAI’s people are graduates of excellent, exclusive private Ivy League universities,” he said. But the company chose as its first university partner a big state institution that works to include, not exclude.
Europe has been quick to respond to the ethical implications of AI in education. In October 2022 – a month before ChatGPT was launched – the European Union published “Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators”.
The guidelines, the EU said, are “designed to help educators understand the potential that the applications of AI and data usage can have in education and to raise awareness of the possible risks so that they are able to engage positively, critically and ethically with AI systems and exploit their full potential”.
Thomas Jorgensen is in favour of regulation around AI. “I would not like to see a completely unregulated, AI-driven ed tech out there.” It is not just a technical discussion, he said, but about policies that give a voice to people and groups in need of inclusion.
The European Union’s recent AI Act moves along this track. It acknowledges an important role for AI in education, but is concerned about issues such as equity and rights – ensuring, for instance, that the data sets used do not reproduce bias. These are major concerns, Jorgensen stressed: “You need to guarantee to your students, particularly minority students, that you’re not being guided as a minority student, you’re not being labelled because of your religious background, or as somebody who might have a problem and should take another course.”
There are also security concerns, intersecting with the equity and sharing imperative that drives the open source movement in higher education. For instance, in France there is ‘general purpose’ AI that is open source and that anybody can access – and tamper with or misuse.
There are challenges such as this one on the horizon, Jorgensen said. The AI Act contains “generous exceptions for research purposes. Things that are absolutely forbidden to do – such as emotion recognition or subliminal manipulation systems – can be used for research”.
Research into areas such as biohazards or viruses requires good security systems so that they do not fall into the wrong hands. Technology can be abused, and dangerously so. “I expect people who research viruses to be in suits and clean rooms with locked gates so things don’t get out. I expect the same for researchers in the nasty parts of AI,” said Jorgensen.
How disruptive might AI be?
The uses and implications of AI will continue to evolve, as the technology itself does.
More and more universities have technology-based services for students, such as user-friendly AI-enhanced student engagement platforms that provide services on demand and around the clock, freeing up support and administrative staff to assist students with problems.
AI activity at Arizona State and other universities is likely to take integration to a new level this year. In Europe, a normal-sized university has 30,000 to 50,000 students, and many have administrative staff shortages, opening the door for AI to provide improvements. Many publicly funded universities in the US face the same lack of financial or human support.
But there will be restrictions on uses of AI in Europe, as Jorgensen pointed out. “The European Union AI Act defines education as a high-risk area.
“For instance, a chatbot providing guidance might lead to decisions that have a major impact on people’s lives. That’s a high risk. This doesn’t mean you can’t do it, but it requires clarity about what data sets the chatbot has been trained on, and human oversight.”
Ed tech discourse suggests that fully automated student guides will be better than the support currently available. But that has yet to be proved, Jorgensen said. He used web advertising as an example: “Algorithms figure that if you want this, you probably want that. But we’re still at the point that if I buy a washing machine, Facebook will think I immediately need to buy another one. I’m not sure of the efficiency of that,” he said.
Jorgensen believes the real generative AI revolution will be in research. “If we don’t do it, somebody will. The potential is so big that already the big tech companies are major players, letting their algorithms loose on material science and chemistry. IBM does it, Google does it,” he said.
Assessing the disruptive potential of AI in teaching and learning requires more evidence on exactly what problems AI might solve, Jorgensen added. Some of the concerns around AI in student learning raise philosophical questions about what learning is and what is taught.
“Are we teaching students to put words together in a certain sequence, with facts in it, in a certain style? This has been seen as not an automatic process, but are people programmed to reproduce information in a similar way to a smart machine? That’s disruptive at a philosophical level for education.”
Jorgensen suggested reading science fiction – classic robot novels such as Isaac Asimov’s robot series – to explore questions around how the mind works, human behaviour and whether the brain can be replicated robotically.
Science fiction, he reasoned, might help us to understand today’s AI science fact.