AI: How Will It Affect Higher Education? A Special from The Chronicle of Higher Education
June 4, 2023

How Will Artificial Intelligence Change Higher Ed?

ChatGPT is just the beginning. 12 scholars and administrators explain.

 

THE REVIEW | FORUM, MAY 25, 2023

When ChatGPT made its public debut last year, the CEO of OpenAI, the company behind ChatGPT, predicted that its significance “will eclipse the agricultural revolution, the industrial revolution, the Internet revolution all put together.” Even discounting for hyperbole, the release of ChatGPT suggests that we’re at the dawn of an era marked by rapid advances in artificial intelligence, with far-reaching consequences for nearly every facet of society, including higher education. From admissions to assessment, academic integrity to scholarly research, university operations to disappearing jobs, here’s how 12 professors, administrators, and writers answer the question: How will AI change higher education?

— The Editors

Bryan Caplan | Jeffrey J. Selingo | Philip Kitcher | Hollis Robbins | Ted Underwood | G. Gabrielle Starr | Lee Vinsel | Mung Chiang | Rick Clark | Leon Botstein | Darryll J. Pines | danah boyd

Revealing the Farce of Higher Ed

Imagine an AI bot endlessly churning out turgid prose on ‘neoliberalism.’

BY BRYAN CAPLAN

I’ve spent much of my career arguing that the main function of higher education is not to teach useful skills (or even useless skills), but to certify students’ employability. By and large, the reason our customers are on campus is to credibly show, or “signal,” their intelligence, work ethic, and conformity. While we like to picture education as job training, it is largely a passport to the real training that happens on the job.

How will AI change our system? To start, we should bet against any major disruption. Students will not abandon traditional colleges en masse for virtual AI training centers. And since most majors are already barely job-related, students will generally stick with their existing majors.

What will change for students is workload and evaluation. Professors are used to assigning papers and projects to be done outside of class. Using AI to cheat on such work will soon be child’s play. Harsh punishments for cheating might preserve the status quo, but colleges generally give cheaters a slap on the wrist, and that won’t change. Unmonitored academic work will become optional, or a farce. The only thing that will really matter will be exams. And unless the exams are in-person, they’ll be a farce, too. I’m known for giving difficult tests, yet GPT-4 already gets A’s on them.

AI will also change the labor market itself. As a general rule, new technologies take decades to realize their full potential. The first trans-Atlantic telephone cable wasn’t laid until 1956, 80 years after the invention of the phone. E-commerce was technically feasible in the mid-’90s, but local malls still endure. We should expect AI to slowly disrupt a wide range of industries, including computer science itself. This will, in turn, slowly alter students’ career aspirations.

But as an economist, I scoff at the idea that AI will permanently disemploy the population. Human wants are unlimited, and human skills are flexible. Agriculture was humanity’s dominant industry for millennia; when technology virtually wiped out agricultural jobs, we still found plenty of new work to do instead. I do not, however, scoff at the idea that AI could drastically reduce employment in computer science over the next two decades. Humans will specialize in whatever AI does worst.

What about us humble professors? Those of us with tenure have nothing to worry about. Taxpayers and donors will keep funding us no matter how useless we become. If you don’t have tenure, students will keep coming and your job will go on — unless you’re at a mediocre private college with a small endowment.

Except for cutting-edge empiricists, however, AI will make research even more of a scam than it already is. Consider the ease with which the Sokal Hoax or the “grievance studies” hoax were perpetrated. AI will be able to do the same on an industrial scale. If you want a picture of the academic future, imagine a bot writing a thousand turgid pages a second about “neoliberalism,” “patriarchy,” and “anti-racism” — forever. Outside of the most empirical subjects, the determinants of academic status will be uniquely human — networking and sheer charisma — making it a great time to reread Dale Carnegie’s How to Win Friends and Influence People.

Bryan Caplan is a professor of economics at George Mason University.

AI Will Help Control Costs

Admissions and student services will be affected first.

BY JEFFREY J. SELINGO

For decades, higher education’s solution to burgeoning enrollment and increased demand for student services stayed the same: hire more people. As Frank Yeary, a former vice chancellor at the University of California at Berkeley, told me in 2012, “We threw people at problems, rather than technology.”

But now, with enrollment declines exacerbating financial worries and more than half of college staff members considering leaving their jobs in the next year, colleges need to learn how to rely on technology. While much of the discussion has focused on what generative AI means for teaching, learning, and research, its immediate impact will likely be felt on functions outside of the academic core.

Let’s start where turnover rates among staff are the worst: admissions. Most colleges accept most students who apply, using a selection process that is routine and predictable. AI could be trained to make decisions about who gets accepted — or at least make the first cut of applicants. Yes, colleges will still need humans for recruiting, but even there, AI is increasingly capable of finding and marketing to prospective students. At selective institutions, which have seen their application totals skyrocket in recent years, AI can at the very least review the quantitative elements of admissions, such as high-school courses and grades, and reduce the number of files that need a human touch.

Colleges have already started to deploy AI-powered chatbots to answer students’ everyday questions and help them show up for classes. Saint Louis University, for instance, added to dorm rooms smart devices programmed to answer more than 600 questions, from “What time does the library close tonight?” to “Where is the registrar’s office?”

The next iteration of these chatbots is to personalize them to answer questions that are specific to a student (“When is my history exam?”) and bring them into the classroom. When Georgia State University added a chatbot to an intro government course to nudge students on studying and assignments, researchers found significant improvements in student performance. In particular, first-generation students who got reminders took the recommended actions, such as completing assignments, at higher rates.

With campuses now awash in data about their students and operations, AI can be used to tackle administrative functions from financial aid to the registrar’s office. At Arizona State University, AI is rewriting course descriptions to make them more informative for prospective students and improve search performance on the web.

So far, however, most of the uses of AI on the administrative side are about making operations more efficient or improving the student experience, not reducing the work force. Officials at companies that provide AI services to higher education tell me that colleges are sometimes reluctant to buy the products because they don’t want them to be seen as replacing people. But until campuses use AI in that way — to take over for people in jobs that involve processing information or doing repeatable tasks — we won’t reverse or slow down the upward-cost trajectory of higher education, where most tuition dollars are now spent on functions outside of the classroom.

Jeffrey J. Selingo, an author and former editor of The Chronicle of Higher Education, is a professor of practice at Arizona State University.

Liberating Labor

AI and the expansion of meaningful work.

BY PHILIP KITCHER

Most of today’s discussions about the impact of AI on education focus on perceived problems in teaching and evaluating students. Yet there is a far larger effect of AI’s advance: Many types of work will be taken over by machines, and jobs will vanish.

This change is typically seen as a cause for gloom. I suggest we see it as an opportunity to revitalize education by replacing unsatisfying work with meaningful labor.

Education aims to help young people develop as individuals and as citizens, not simply to become cogs in the machinery of national production. If the cogs can be made by ingenious AI systems, so much the better. The previously employed people can be liberated to do something else. Not necessarily to expand productivity, but to advance educational goals that we neglect.

Let’s do that by vastly increasing the number of people who educate the young and raising the pay of those who undertake these important tasks — the “main enterprise of the world,” as Emerson once characterized it. Let’s remove the stigma from service work and appreciate its enormous worth. Let’s train more adults and put more of them in the classroom. Let’s give them the opportunity to recognize each child’s individuality, to help young people understand themselves, overcome the difficulties they encounter, and work with one another.

During the past few decades, education in the affluent world has been largely dominated by a crude economic imperative. But children are not vehicles to be sent into some global demolition derby. If students are to have fulfilled lives, in which they can take on civic responsibilities and build healthy communities, they need far more than a collection of technical skills based on contemporary understandings of what ventures may equip a nation with some competitive edge. Giving teachers the respect and social standing they deserve, and recruiting a larger group of adults to join them and assist them in the classroom, would advance the goals serious thinkers about education, from Plato to the present, have always recognized.

AI’s displacement of labor can be an opportunity for liberation.

Philip Kitcher is a professor emeritus of philosophy at Columbia University.

17 Notes on Academic AI

The ground is shifting under our feet.

BY HOLLIS ROBBINS

  1. AI knows more than any one person knows, but every person knows things that AI does not know.
  2. A large university may know more than AI knows, but the knowledge is fragmented and distributed.
  3. The universe of information includes important, useful information and seemingly unimportant information; it is hard to know what might become important someday. It is good to have scholars focused on obscure narrow topics.
  4. Education is still a matter of teaching people how to access information and how to turn information into knowledge.
  5. The professional distinction between teachers (who transfer information) and scholars (focused on knowledge production) will become more stark.
  6. Knowledge production is upstream from information transfer. Most interactions with AI-chat models occur downstream. Or, if you think of knowledge as a pyramid, most AI chat is at the very base level.
  7. Methods of organizing and systematizing information are becoming more important. Catalogs, canons, and curated lists will become more valuable.
  8. The textbook industry should be worried.
  9. Scholars are best situated to know what is not yet known, to identify “blank spaces” in the universe of knowledge.
  10. Higher education will be less about ensuring students know what they’ve read and more about ensuring they read what is not yet known by AI.
  11. The written essay will no longer be the default for student assessment.
  12. At the time of this writing, AI writing is technically proficient but culturally evacuated.
  13. Until culturally inflected AI is developed, models such as ChatGPT will stand apart from culture. Knowledge production within culture will not fully be absorbed by AI.
  14. Specific and local cultural knowledge will become more valuable.
  15. Experiential learning will become the norm. Everyone will need an internship. Employers will want assurances that a new graduate can follow directions, complete tasks, and demonstrate judgment.
  16. Programs such as Hallie Pope’s Graphic Advocacy Project will argue for new communication tools and modalities.
  17. Years ago, I assigned Hélène Cixous’s feminist classic “The Laugh of the Medusa” (1975) and a student came to class saying, “I can’t write a response essay. Instead, I am going to give you a hug.” And she did. Assessment may take new and unexpected forms.

Hollis Robbins is dean of humanities at the University of Utah.

‘Calculators for Words’

Approached cautiously, AI can help us teach and learn.

BY TED UNDERWOOD

Our understanding of AI is strongly shaped by the recent success of ChatGPT and other chatbots. They mold a language model (which in principle could imitate any number of genres) into the familiar form of a question-answering utility like Siri or Google Search: ask a short question and get a short answer. An all-purpose question answerer is certainly easy to use. But professors want students to acquire habits of skeptical dialogic inquiry, not a habit of relying on an oracle.

In an academic context, we should approach language models as engines for provisional reasoning — “calculators for words,” as British programmer Simon Willison calls them. Instead of assuming that the model already has an answer to every question in memory, this approach provides, in the prompt, any special assumptions or background knowledge the model will be expected to use. A student might hand a model a scientific article and ask for explication of a difficult passage. A researcher might provide a thousand documents, serially, and ask the model to answer the same questions about each one. A writer might provide a draft essay and ask for advice about structure.
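
To make that posture concrete, here is a minimal sketch of the researcher’s workflow described above, assuming the OpenAI Python client; the model name, folder path, and question are placeholder assumptions, not recommendations. The point is that the source text travels in the prompt, so the model computes over what we supply rather than pronouncing from memory.

```python
# A minimal sketch of "provisional reasoning" with a language model:
# the background text is supplied in the prompt, so the model reasons
# over what we provide rather than over whatever it may have memorized.
# Assumes the OpenAI Python client; model name, folder path, and the
# question are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What method does this document describe, in one sentence?"

for doc_path in sorted(Path("documents").glob("*.txt")):
    source_text = doc_path.read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            # State the ground rules: answer only from the supplied text.
            {"role": "system",
             "content": "Answer using only the provided document. "
                        "If the document does not say, say so."},
            {"role": "user",
             "content": f"Document:\n{source_text}\n\nQuestion: {QUESTION}"},
        ],
    )
    print(doc_path.name, "->", response.choices[0].message.content)
```

The design choice, not the code, is what matters: like a calculator, the model is asked to work over the inputs it is handed rather than to serve as an oracle.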

Used in this epistemically cautious way, AI has a valid, important role to play in education. But if a cautious approach to AI is advisable, I don’t mean to imply that it is easy to achieve. Students will be tempted instead to rely on models as oracles. And it will take work to define an appropriate level of caution. What counts as an assumption that needs to be offered provisionally in a prompt — or as linguistic common sense that can be hard-coded in a model? The question is contestable.

In fact, we will need to teach students to contest it. Students in every major will need to know how to challenge or defend the appropriateness of a given model for a given question. To teach them how to do that, we don’t need to hastily construct a new field called “critical AI studies.” The intellectual resources students need are already present in courses on the history and philosophy of science, along with the disciplines of statistics and machine learning themselves, which are deeply self-conscious about their own epistemic procedures. Readings from all of those fields belong in a 21st-century core curriculum.

Ted Underwood is a professor of information science and English at the University of Illinois at Urbana-Champaign.

AI Can Enhance the Pleasures of Learning

It will contribute to the deeply human parts of us.

BY G. GABRIELLE STARR

AI is part of a larger phenomenon that has shaped knowledge production around the world — an enormous democratization of what is known as well as what is knowable. Indeed, almost all of the macro-trends shaping higher education come down to the increasing availability of information. At times, it seems, people want to credit the internet for this revolution. However, it’s not Wikipedia that has made the biggest change to the availability of information, or what counts as knowledge.

Look at our own institutions.

Since the ’60s, colleges have been home to new fields of discovery that have not only expanded the landscape of knowledge, but have helped expand the collective sense of what it means to be human.

Women’s, gender, Africana, Asian American, Chicano/a and Latino/a studies are a few examples. We also have fields that have stretched the limit of how we know (computer science, neuroscience, or cognitive science), what we know (materials science or string theory), and how we use it (environmental engineering, data science, and nanotechnology), just to name a few.

It is commonplace to lament that colleges are great at adding new things but not so great at ending them. But that isn’t something to lament. We are here to add to knowledge, not erase it or pretend it never happened.

This means that the frontiers of knowledge aren’t here to be shunned by higher education. They are here to be pressed. If AI is an outgrowth of the human drive to know, which it is, it can emerge as a positive contributor not just to what humans know, but how.

My conviction here emerges in part from my own research in the neuroscience of aesthetics, which has led me to believe that the probabilistic nature of human learning — the very principle that undergirds modern AI — is intricately connected to something that seems to hit at the core of who we are as humans: our ability to experience beauty.

Simply put, beauty emerges with certain kinds of learning because we can experience pleasure when the predictions we make about the world are verified — when we predict what happens next, or what goes together, and we are right. We also experience pleasure when we learn something new, especially when what we learn provides insight into something that had previously been ambiguous, uncertain, or confusing. More than this, aesthetic pleasure can enhance learning by orienting us toward objects and experiences that may afford the biggest learning gains.

Nothing ChatGPT does will take away the pleasures of learning. More profoundly, it seems clear that the active part of learning is immensely important. Learning requires more than synthesis of information, for it is in the testing of knowledge that we make the biggest gains in understanding. This means that it is how we put our knowledge to work that matters. It also means that learning is a social undertaking, in which we discuss, dispute, verify, reject, modify, and extend what we (think we) know to other people and the world around us. These are fundamentally human endeavors.

What differentiates humans from AI, in part, comes down to that: The pleasures of learning lead us toward creative possibilities, as well as toward active experimentation. For now, humans do that, and they do it pretty well. And I would advise that humans limit AI action — the ability to directly influence the world around us by altering or manipulating it — because it is in action that what humans value most really lies. It is not just in what we do, but in the effects of what we do in the world. It is in ethics that humanity finds itself and determines its own meaning.

The hope for humanity remains where it always was: with us and us alone.

G. Gabrielle Starr is president of Pomona College.

Don’t Believe the Hype

Previous tech bubbles offer lessons for AI.

BY LEE VINSEL

The announcement of ChatGPT and the subsequent wave of emotional energy, both positive and negative, have triggered a powerful case of déjà vu among scholars who study the socioeconomic effects of new technologies. After all, it was less than a decade ago that books like Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014) and Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future (2015) were predicting rapidly increasing productivity as a result of artificial intelligence and robotics. Those predictions came to naught. Productivity growth has remained low throughout the last decade, and in some quarters in the United States has even been negative. Historians of computing will tell you about similar bubbles around AI in the 1960s and 1980s, which, when the promises of the technologies failed to arrive, were followed by “AI winters,” periods when funding for research into artificial intelligence all but dried up.

Yet we still see hysterically positive and negative appraisals of ChatGPT. Sam Altman, chief executive of ChatGPT’s maker, OpenAI, told reporters that AI would “eclipse the agricultural revolution, the industrial revolution, the internet revolution all put together.” Meanwhile, the Future of Life Institute published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, on the grounds that they could threaten humanity. At the time of this writing, nearly 30,000 people have signed the letter.

Why, after so many technology bubbles that have ended in pops and deflations, do these same histrionic scripts continue to play out? Surely, many factors play a role in creating bubbles, including uncritical media hype and irresponsible social-media influencers. But one factor that colleges and universities have some control over is that, to date, we simply do not teach students how to think about the social and economic impacts of new technologies.

Such an education would teach students about how technology bubbles are rooted in the power of narratives about how new systems will develop and influence society, as the business-school professors Brent Goldfarb and David A. Kirsch discuss in their 2019 book, Bubbles and Crashes: The Boom and Bust of Technological Innovation. The first stories to emerge around new technologies typically offer hyperbolic assessments of powers, promises, and threats. And they are almost always wrong.

Lee Vinsel is a professor of science, technology, and society at Virginia Tech.

AI Poses Hard Questions About Individual Rights

We’ll need to plan very carefully.

BY MUNG CHIANG

Let’s assume that AI will transform every industry and everyone — changing how we live, shaping what we believe in, displacing jobs. And disrupting education.

Well, after IBM’s Deep Blue beat the world chess champion, we still play chess. After calculators, children are still taught how to add. Human beings learn and do things not just to survive, but also for fun, or to train our minds.

That doesn’t mean we don’t adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for arithmetic speed. Once online search became widely available, colleges taught students how to properly cite online sources.

Some have explored banning AI in education. That would be hard to enforce; it’s also unhealthy, as students will need to function in an AI-infused workplace upon graduation.

Pausing AI research would be even less practical, not least because AI is not a well-defined, clearly demarcated area. Colleges and companies around the world would have to stop any research that involves math. One of my Ph.D. advisers, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing they would later become useful in what others call AI.

As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking what’s given, not imagining beyond what’s given. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of “similarity classes.”
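
As an aside for readers who want to see what that grouping looks like mechanically, here is a minimal sketch, assuming scikit-learn and entirely invented user data: a clustering step assigns each individual to one of a handful of “similarity classes,” and the later “personalized” decision is really a decision about a class.

```python
# A minimal sketch of the grouping step behind much "individualization":
# cluster users into a few similarity classes, then personalize per class.
# Assumes scikit-learn; the feature matrix is random, illustrative data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
user_features = rng.normal(size=(1000, 8))  # 1,000 people, 8 behavioral features

# Reduce 1,000 individuals to 5 similarity classes.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
class_labels = kmeans.fit_predict(user_features)

# "Personalization" then keys off the class, not the person: every
# member of a class receives the same recommendation.
recommendations = {label: f"content bundle {label}" for label in range(5)}
for user_id in range(3):
    print(user_id, "->", recommendations[class_labels[user_id]])
```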

We need to skeptically scrutinize the dependence of AI engines’ output on their input. Data tends to feed on itself, and machines often give people what they want to see.

We need to preserve dissent even when it’s inconvenient, and avoid philosopher-kings dressed in AI.

We need entrepreneurs to invent competing AI systems and maximize choices outside the big-tech oligopoly. Some of them will invent ways to break big data.

Like many technologies, AI is born neutral but can be abused, especially in the name of the “collective good.” The gravest risk of AI is its abuse by authoritarian regimes. My worst fear is that AI will shrink individual freedom. Our best hope for AI is that it advances individual freedom. To ensure that it does, we need verifiable principles. We need to preserve the capacity to opt out, to the maximum degree possible. We need to ensure transparency in how we deploy AI. And we need to ensure that individuals have the ability to litigate, in an independent judicial system under the rule of law, when their rights are violated.

Let us preserve individuals’ rights.

Let our students sharpen the ability to doubt, debate, and dissent.

Mung Chiang is president of Purdue University. This essay is adapted from his 2023 commencement address at Purdue.

The End of ‘Reading Season’

AI will free the admissions staff from the drudgery of poring over applications.

BY RICK CLARK

The landscape of college admissions is bifurcated. The “haves” — colleges with big national reputations and large endowments — tout escalating numbers of applicants and precipitous drops in admit rates. Conversely, at the “have-nots,” discount rates rise, the threat of layoffs and furloughs becomes more common, and the pressure to reverse fiscal shortfalls expands with each cycle.

Behind the numbers are people: admissions deans, directors, and their staffs. And while their day-to-day pressures are sharply different, their posture is the same: broken.

Two recent Chronicle articles captured the causes: low salaries, stunted career progress, and poor work-life balance, to name a few. Answers, however, have been in short supply. While AI is not a panacea, it is a significant part of the solution.

Colleges that invest in and rigorously train AI to generate automated, high-quality responses to student inquiries will free admissions staff members from perfunctory, time-consuming, and low-return work, allowing them to focus instead on higher-level, mission-aligned efforts. That shift will increase professional-growth opportunities and network building, both of which will foster job satisfaction and retention.

Admissions officers at the “haves” are overwhelmed by the volume of applicants and the unforgiving calendar, which requires receipt, review, and return of decisions in a matter of weeks or months. Rolling out AI software that can map prior admissions decisions, assess the performance of current students with similar profiles, and make preliminary recommendations will allow admissions officers to spend far less time reading essays and combing through student activities.
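
For readers curious what “mapping prior admissions decisions” might look like mechanically, the sketch below shows one plausible shape, assuming scikit-learn and invented applicant data: a model fit to past decisions scores new files, and only the ambiguous middle band is routed to human readers. It illustrates the idea, not any institution’s actual system.

```python
# A minimal sketch of a "first cut" admissions model: fit to prior
# decisions, then route only uncertain files to human readers.
# Assumes scikit-learn; the features and labels are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

# Invented quantitative features: GPA, course rigor, test percentile.
past_applicants = rng.uniform(size=(5000, 3))
past_decisions = (past_applicants.mean(axis=1) > 0.55).astype(int)  # toy labels

model = LogisticRegression().fit(past_applicants, past_decisions)

new_applicants = rng.uniform(size=(200, 3))
admit_probability = model.predict_proba(new_applicants)[:, 1]

# Clear admits and clear denies are handled automatically; the
# ambiguous middle band goes to a human reader.
needs_human_review = (admit_probability > 0.3) & (admit_probability < 0.7)
print(f"{needs_human_review.sum()} of {len(new_applicants)} files need a human read")
```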

Monotony leads to burnout. And nothing says monotony like “reading season” (the endless hours, days, and weeks admissions officers spend poring over applications). The efficiency and speed of AI models can interrupt that cycle, allowing admissions officers to look less at the trees (individual applications) and more at the forest (institutional goals, mission, and priorities).

Many view AI as a threat. But when it comes to college admissions, I have no doubt that AI will be an agent of redemption.

Rick Clark is assistant vice provost and executive director of undergraduate admissions at the Georgia Institute of Technology.

An Optimistic View

AI will make the university more human.

BY LEON BOTSTEIN

The current furor about AI is too little and too late, and has turned into the sort of hysteria that has accompanied most if not all technological revolutions in the West. The printing press inspired fear and panic among the minuscule portion of the European population that was literate. Printing led to mass literacy, which was then held responsible for the Terror during the French Revolution; pessimism surrounded mass print journalism and fiction. Balzac and Flaubert reveled in depicting the corrosive influence cheap romance novels and newspapers had on their readers. The current debate over AI is vaguely reminiscent of past controversies about the influence of recording (gramophone and radio), and later in the 20th century, film and television. We are still struggling to understand the impact of the computer, the internet, and the smartphone. But we should remember, as a cautionary tale, that the railroad was once considered deleterious to health and an engine of cultural decline.

Yet we ought not underestimate the potential of AI. It must be regulated. But its transformational power and progress cannot and will not be stopped. Nor should it be. After taking the proverbial deep breath, we must approach the future with reasoned optimism. AI will certainly force us to concentrate on those talents and skills that will remain uniquely human. For those concerned about students’ abusing the power of ChatGPT, we just have to take the time and make sure we know our students and have worked with them closely enough to both inspire them to do their own work and take pride in work that is their own. We will have to become smarter in detecting manipulation and the deceptive simulation of reality. We have to find the legal and political means to control the vast private corporations that will own this technology. AI, like nuclear weapons, will remind us that the idea that we can live in modernity without government is an illusion.

Most of all, we need to focus on the benefits of AI, particularly in medicine and science. The most significant challenge to colleges will be how to prepare students for a world dominated by AI. How can we help them find joy, meaning, and value, and also gainful employment, when machines will do so much of what we do now? AI will eliminate far more employment than the automatic elevator, E-ZPass, and computer-controlled manufacturing already have. Work, employment, and compensation will have to be redefined, but in a more systematic manner than after the agricultural and industrial revolutions. Without that effort, the ideals of freedom, individuality, and the protection of civil liberties will be at risk.

It is only in the cultivation of human experience — our capacity to think, to link the mind with the heart in writing, imagining, and creating — that we will remain free in an AI-dominated universe. Our affections, our desire to interact with one another — to dream, alone and with others — cannot be supplanted. The promise of those experiences lies squarely in the university, but only if it is prepared to return to its proper human scale in teaching and research, and to abandon its corporate character and outdated concepts of segmented academic professionalism, particularly in the disciplines most needed for creating a better world in the age of AI, those of the arts and humanities.

Leon Botstein is president of Bard College.

Problems We Will Solve

We must consider the technology’s upside.

BY DARRYLL J. PINES

When I began putting together my “State of the Campus” presentation to the University Senate in late February, I workshopped the introduction not only with staff members in the president’s office and our Office of Marketing and Communications, but also with one of the newest arrivals to higher education: ChatGPT.

I didn’t do this as a PR stunt, or to demonstrate how new artificial-intelligence systems will uproot and replace traditional learning — or university faculty and staff. I did it to show that ChatGPT, and whatever comes to compete with it or take it to the next level, is not something we need to fear.

Every generation of students comes of age with new technology. From the calculator and the personal laptop to smartphones to Zoom, each has been initially met with angst about the disruption to traditional teaching. We fear foundational knowledge will be replaced by robotic inputs and outputs, or that personal interactions unmediated by screens will be eliminated. And so the new technology can seem an obstacle to the parts of the educational experience we love the most — the look when a student first grasps a difficult concept, the spark from an original idea during a brainstorming session, the give-and-take of a classroom debate.

But higher ed has a long history of adapting to new events, needs, and technologies: Like the calculator, the personal laptop, smartphones, and Zoom, AI may soon become integral to our enterprise.

We must, of course, confront the technology’s capability to increase plagiarism, the spread of misinformation, and confusion over intellectual-property rights, but the positive potential of AI deserves just as much of higher education’s attention as its pitfalls do. If we are to solve the grand challenges of our time, from climate change and gun violence to pandemics and food, water, and energy security, we will need to use every tool we have. And we will also need to maintain the optimistic drive to discover that has made us capable of extraordinary feats of intellectual achievement, from vaccines and the moon landing to the early stages of quantum computing.

The leaders of Bell Laboratories, one of the most prolific and revolutionary research groups in human history, were fond of saying that while there are plenty of good ideas out there, the more important search was for good problems to solve. What might artificial intelligence mean for our society? That’s a good problem to solve. Let’s get to work.

Darryll J. Pines is president of the University of Maryland at College Park, where he is also a professor of aerospace engineering.

We’re Asking the Wrong Questions

Our panicked reaction to AI is what needs examining.

BY DANAH BOYD

I cannot seem to escape debates about how AI will change everything. They make me want to crawl into a cave. I nod along as my peers blast students for cheating with ChatGPT — then rave about how Bing Chat’s search function is a more effective research assistant than most students. The contradictions that surround new AI chatbots are fascinating, but the cacophonous polarities are excruciating, and obscure as much as they illuminate.

Technology mirrors and magnifies the good, the bad, and the ugly. But the technology itself is rarely the story. If we care about higher ed, the real question isn’t: What will AI do to higher ed? Instead, we should ask: What can the hype and moral panic around AI teach us about higher ed?

After a long history of reinforcing inequities and upholding a culture of white supremacy, parts of higher education have spent the last couple of decades reckoning with that history and reimagining their future. Unfortunately, this reckoning has coincided with a decline in government support for both research and education, a drastic increase in the cost for students, and an increase in public skepticism about the value of the project. In short, we find ourselves in the center of a political and economic maelstrom.

We’re anxious about what AI will do to higher ed because we’re anxious about the future of higher ed. But let’s not allow AI to distract us from the questions that should animate us: What is the role of higher education in today’s world? What societal project does higher education serve? And is this project viable in a society configured by late-stage capitalism? These are hard questions. AI will not solve any of them.

danah boyd is a partner researcher at Microsoft and a visiting professor at Georgetown University.

 
