The impact of AI on the student experience
September 12, 2023

How will AI alter the student experience? Experts weigh in

Artificial intelligence and its implications for higher education are a hot topic right now. Global experts in higher education research and scholarship all agree that we need to address the impacts of AI technology on higher education pedagogy and the student experience, but the agreement ends there: just how large the challenge is remains contested.

Across faculty, staff and students, the felt impacts of AI have been dramatically different, and understandings of just how big the effects will be vary widely. Pinpointing which impacts will make the biggest difference to the student (not to mention faculty and staff) experience is also challenging, particularly as the technology and its uses evolve.

What some leaders may consider a passing fad or a work-process improvement is, for other leaders, an existential challenge to the work of higher education.

Given these diverse perspectives on the impacts that AI may have upon the student experience, continued discussion and education about AI technology is something higher education leaders must cultivate.

At the 2023 Student Experience in the Research University (SERU) Consortium Symposium held at the University of California, Berkeley, consortium leaders Igor Chirikov and John Douglass convened a keynote panel about artificial intelligence and the future of the student experience in the research institution, which I chaired.

The expert panel participants were: Scott Adler, dean of the graduate school and professor of political science at the University of Colorado, Boulder; Camille Crittenden, executive director of the Center for Information Technology Research in the Interest of Society at the University of California; Andreas Breiter, professor and chief digital officer at the University of Bremen in Germany; and Zach Pardos, associate professor of education at UC Berkeley.

We explored several core questions for higher education leaders to consider about AI. From the changing nature of higher education jobs to the need for academic disciplines to reconsider core practices and pedagogies, we identified how critical it is for higher education leaders to start defining what ethical, critical engagement with AI could look like.

Defining AI

So, what are we even talking about when we discuss AI in higher education? Pedagogical conversations about AI have largely focused on the use of ChatGPT and similar large language model technologies.

ChatGPT responds to prompts from human users and generates substantial written text in response. Higher education faculty and staff have been particularly alarmed by this development because of the ease and sophistication with which tools like ChatGPT generate text.

Yet other forms of AI may also shape the student experience, such as image- and video-generating tools that support content creation across the disciplines. Furthermore, AI that streamlines workflows and processes and helps administrators create predictions across large data sets may impact upon the future of admissions.

This wide range of uses and applications must be understood if we are to have deeper conversations about where some of the tension over AI’s impact lies: those who speak to its outsized (or underrated) impact may be looking at only a narrow sliver of AI’s applications. The more holistically we can conceive of and recognise AI’s capacities, the better we, as higher education leaders, can strategise around and appreciate its uses.

How AI may change higher education jobs

Getting more precise when we talk about AI may help higher education leaders confront a reality: that faculty and staff may have to be prepared for their jobs to change.

Adler proposed that mundane tasks will likely become automated, which will mean having to re-think administrative job classifications, duties and responsibilities. Crittenden similarly pointed out that jobs in admissions and retention could look quite different than they do now.

For example, AI technology could help admissions officers vet applications by clustering them into categories based on the admissions criteria that officers use. With the sorting of applications automated, officers would be left to make final decisions based on the AI’s categorisation.
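To make the idea concrete, here is a minimal sketch of what such a clustering step could look like. The panel did not specify any implementation, so everything here is an assumption: the applicant features, the toy records and scikit-learn’s KMeans all stand in for whatever data and model a real system would use.

```python
# Hypothetical sketch only: features and records are invented for
# illustration; a real admissions system would use far richer data
# and keep human reviewers in the loop.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy applicant records: [GPA, test percentile, extracurricular count]
applications = np.array([
    [3.9, 95, 4],
    [3.2, 70, 2],
    [2.8, 55, 1],
    [3.7, 88, 5],
    [3.0, 60, 0],
    [3.8, 92, 3],
])

# Standardise each criterion so no single one dominates the distance metric.
features = StandardScaler().fit_transform(applications)

# Group applications into three buckets for human reviewers to work through.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for record, label in zip(applications, labels):
    print(f"Applicant {record} -> review bucket {label}")
```

Even in this toy form, the bucketing is only a triage step: the cluster count and criteria are placeholders, and the final admissions decisions remain with the officers.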

The student recruitment process also appears poised to change: institutions worldwide are already using AI chatbots to answer students’ questions about what it is like to attend their universities, and AI tutors, chatbots and advisors are also guiding students through frequently asked questions about accessing courses, selecting majors and seeking help on campus.

As an example, Pardos is already exploring the impacts of using AI for tutoring undergraduate students in STEM disciplines, building off prior research with AI chatbots and tutors.

In his research, Pardos predicts that students will become so skilled at seeking basic answers to their questions via AI chatbots and tools such as ChatGPT that they will look to their instructors and human tutors for deeper understanding of course content, since mere knowledge will be so easily at their fingertips. The work of teaching – and how teaching time is spent – will likely change fundamentally as a result.

Rather than seeing AI simply as an existential threat, naming the possible use cases is essential for illuminating what is and is not possible with AI. This means encouraging faculty and staff to experiment with AI applications as appropriate and to be curious about the possibilities.

Some leaders may discover potential they had not realised was there, while others may remain sceptical. As leaders, it is worth inviting this range of perspectives so as to have a holistic and meaningful approach to engaging with AI.

Academic disciplines need to be responsive

Faculty across the disciplines need to be responsive to the changes AI will bring, both to how work is done in their fields and to how students can be engaged in thinking critically about AI.

Even if the current hype is outsized compared with AI’s actual impact, students, faculty and staff need to be having active and ongoing conversations about what it means to interrogate and explore AI technology in their learning and working lives.

Breiter pointed out that critical thinking about AI should be a core part of students’ experiences so that students are prepared for the multimodal learning experiences they will continue to encounter beyond the higher education environment.

Part of being responsive to AI’s impact on academic disciplines will be, in Breiter’s estimation, to engage in creating a campus-wide code of good conduct about how to use AI in ways that help faculty evolve their assessment strategies.

Adler similarly made the case that disciplines as varied as computer science, political science, law, physics and history will need to consider how the knowledge created from AI will impact upon both how scholars conduct research and how students engage with that research.

Taking up both Breiter’s and Adler’s suggestions would mean approaching critical digital literacy from multiple angles: from thinking about how the use of AI can be part of every curricular experience to considering how AI may change specialised disciplinary knowledge.

In other words, academics will need to consider how using (or resisting) AI could be a part of the university’s obligation to engage students in understanding the literacies they need to be successful, within and beyond college.

Developing shared digital literacy standards for the campus community may be one way to engage with the diverse demands that AI creates. That would mean having campus leaders with expertise in AI and data literacy provide foundational knowledge about how AI works, and creating benchmarks for assessing the campus community’s understanding of what AI is, so that conversations are grounded in the technology and its true capacities.

Ethical considerations are core

Perhaps the most important component of engaging in conversations about AI in higher education is thinking through the consequences that AI adoption may have for advancing (or deterring) the institution’s ethical mission: making sure all faculty, staff and students feel included in their experience.

AI opens up numerous concerns, from academic dishonesty to the lack of diversity often reflected in AI datasets. Because universities are not the ones engineering these AI solutions, higher education leaders need to consider how they will work with AI vendors to ensure that their products do not pose risks to student privacy or the student experience.

Pardos pointed out that, generally speaking, vendors are not up to speed on what it means to use AI for authentic learning. He encourages creating a checklist of considerations for vendors to know about before entering into agreements with campuses. Crittenden similarly urges campus chief information and chief technology officers to know what to expect before signing contracts with AI vendors, and to develop campus standards before entering agreements.

There is great potential for AI to support students from diverse backgrounds with tasks such as language translation and interpretation, but there is also great risk that AI trained on incomplete data sets may produce, at best, inaccurate translations and, at worst, offensive outputs.

AI models, in other words, typically have biases baked into their datasets, and educators and campus leaders have a responsibility to recognise those biases and consider how biases can be mitigated. This means that higher education leaders cannot uncritically adopt AI or simply encourage experimentation without being transparent about the potential risks involved.

Students, faculty and staff alike should be fully agential in determining how, where, when and why they may want to use AI across their institutional contexts.

The future is here

AI technology is here to stay, and it is worth higher education leaders’ time to think now about how they can engage with it to protect student privacy, enhance the student experience, and help all stakeholders on campus be prepared for a future of working with powerful technology.

When it comes to supporting student learning, nurturing the higher education community’s curiosity (and sometimes dread!) about emerging technologies can help all of us make more thoughtful and careful decisions about where, when and how to use AI as part of the educational experience.

It is equally critical to give students, faculty and staff the information they need to feel empowered and equipped to make good choices about how, when and whether engaging with AI is right for them right now.

Above all, the hype around AI may create great urgency to start devising technology-based solutions to institutional and infrastructural problems. This is the wrong impulse. Taking the time instead to understand the capacities of these tools, and developing a strategy to help the campus community likewise understand how they work, will be critical to fostering trust and building a deeper, more meaningful relationship with technology in higher education contexts.

Jenae Cohn is the executive director of the Center for Teaching and Learning at UC Berkeley. The Student Experience in the Research University (SERU) Consortium, based at UC Berkeley and the University of Minnesota, is a community of research-intensive universities collaborating on administering student surveys and sharing data for institutional self-improvement.
