How to Measure the Total Impact of Academic Production?
February 19, 2012


Scholars Seek Better Ways to Track Impact Online
By Jennifer Howard for The Chronicle, January 29, 2012
In academe, the game of how to win friends and influence people is serious business. Administrators and grant makers want proof that a researcher’s work has life beyond the library or the lab.
But the current system of measuring scholarly influence doesn’t reflect the way many researchers work in an environment driven more and more by the social Web. Research that used to take months or years to reach readers can now find them almost instantly via blogs and Twitter.
That kind of activity escapes traditional metrics like the impact factor, which indicates how often a journal is cited, not how its articles are really being consumed by readers.
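For context, the impact factor is simple arithmetic: citations received in a given year to a journal's output from the previous two years, divided by the number of citable items it published in those two years. A minimal sketch, with invented figures:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor for year Y: citations received in Y
    to items the journal published in Y-1 and Y-2, divided by the
    number of citable items it published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 500 citations in 2012 to the 200 items
# it published in 2010 and 2011 gives an impact factor of 2.5.
print(impact_factor(500, 200))  # 2.5
```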
An approach called altmetrics—short for alternative metrics—aims to measure Web-driven scholarly interactions, such as how often research is tweeted, blogged about, or bookmarked. “There’s a gold mine of data that hasn’t been harnessed yet about impact outside the traditional citation-based impact,” says Dario Taraborelli, a senior research analyst with the Strategy Team at the Wikimedia Foundation and a proponent of the idea.
[Photo caption: Jason Priem, a graduate student at the University of North Carolina at Chapel Hill, is part of the team that created Total-Impact, a system to track the movement of research across the Web.]
Interest in altmetrics is on the rise, but it’s not quite right to call it a movement. The approach could better be described as a sprawling constellation of projects and like-minded people working at research institutions, libraries, and publishers.
They’ve been talking on Twitter (marking their messages with the #altmetrics hashtag), sharing resources and tools online, and developing ideas at occasional workshops and symposia. They’re united by the idea that “metrics based on a diverse set of social sources could yield broader, richer, and timelier assessments of current and potential scholarly impact,” as a call for contributions to a forthcoming altmetrics essay collection puts it.
Jason Priem, a third-year graduate student at the School of Information and Library Science at the University of North Carolina at Chapel Hill, is a leader in this push to track impact via the social Web. Scholarly workflows are moving online, leaving traces that can be documented—not just in articles but on social networks and reference sites such as Mendeley and Zotero, where researchers store and annotate scholarship of interest. “It’s like we have a fresh snowfall across this docu-plain, and we have fresh footprints everywhere,” he says. “That has the potential to really revolutionize how we measure impact.”
Mr. Priem helped write a manifesto, posted on the Web site altmetrics.org, which articulates the problems with traditional evaluation schemes. “As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest,” the manifesto argues. “Unfortunately, scholarship’s three main filters for importance are failing.”
Peer review “has served scholarship well” but has become slow and unwieldy and rewards conventional thinking. Citation-counting measures such as the h-index take too long to accumulate. And the impact factor of journals gets misapplied as a way to assess an individual researcher’s performance, which it wasn’t designed to do.
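For reference, the h-index mentioned above is easy to state and compute: a researcher has index h when h of his or her papers have at least h citations each. A minimal sketch, which also makes the lag visible, since every count has to accumulate citation by citation:

```python
def h_index(citation_counts):
    """A researcher has index h if h of their papers have been
    cited at least h times each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites < rank:
            break
        h = rank
    return h

# Five papers cited [10, 8, 5, 4, 3] times yield h = 4:
# four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```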
“I’m not down on citations,” Mr. Priem says. “I’m just saying it’s only part of the story. It’s become the only part of the story we care about.”
That’s where altmetrics comes in. It’s a way to measure the “downstream use” of research, says Cameron Neylon, a senior scientist at Britain’s Science and Technology Facilities Council and another contributor to the manifesto. Any system that turns out to be a useful way to measure influence will tempt the unscrupulous to try to game it, though. One concern, for instance, is that someone could build a program that would keep tweeting links to an article to inflate its altmetrics numbers.
Devising a Method
So how do you reliably measure fluid, fast-paced, Web-based, nonhierarchical reactions to scholarly work? That problem has been keeping Mr. Priem busy. He’s part of the team that designed an altmetrics project called Total-Impact.
Researchers can go to the site and enter many forms of research, including blog posts, articles, data sets, and software they’ve written. Then the Total-Impact application will search the Internet for downloads, Twitter links, mentions in open-source software libraries, and other indicators that the work is being noticed. “We go out on the Web and find every sort of impact and present them to the user,” Mr. Priem explains. When possible, the team gathers data directly from services’ open application programming interfaces, or APIs.
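The article doesn’t show code, but the pattern Mr. Priem describes is a fan-out over open APIs: take one identifier, ask each service what it knows, and merge the answers. A minimal sketch, in which the endpoints and response fields are invented stand-ins rather than Total-Impact’s real sources:

```python
import requests

# Invented stand-ins for the open APIs an altmetrics tool might
# query; the real services and their response formats differ.
SOURCES = {
    "bookmarks": "https://api.example-reference-manager.org/counts",
    "tweets": "https://api.example-tweet-search.org/counts",
}

def collect_metrics(identifier):
    """Query each source's open API for one research artifact and
    merge whatever counts come back into a single report."""
    report = {}
    for name, endpoint in SOURCES.items():
        resp = requests.get(endpoint, params={"id": identifier}, timeout=10)
        if resp.ok:
            report[name] = resp.json().get("count", 0)
    return report

# Placeholder DOI, purely for illustration.
print(collect_metrics("10.1371/journal.pone.0000000"))
```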
These are very early days for Total-Impact, and there’s a lot of information it doesn’t gather yet. For instance, right now it only searches blogs indexed by the site Research Blogging. That “amounts to a very small subset of science blogs,” according to Mr. Priem, who adds that most of the other metrics are more robust.
“Although it’s still in alpha and has plenty of bugs, if you upload identifiers, you can and do get all sorts of impact information back,” he says. “We’ve gotten many reports of people using the application, although certainly not in vast numbers” yet. “We’ve also gotten many requests from academic publishers and creators of scholarly Web applications to embed TI data into their pages” using Total-Impact’s open API, he says.
He doesn’t know yet how significant Total-Impact will prove to be. Will scholars take to it? Will tenure-and-promotion gatekeepers be willing to add altmetrics to the evaluation mix any time soon? Those are big unknowns right now. The long-term goal is “to completely change the way scholars and administrators think about academic impact” and get them to move away from what Mr. Priem calls “a citation-fetishizing article monoculture.” But he’s realistic. “Clearly, that’s going to take some time,” he says.
The Total-Impact site features several cautions about how it should and should not be used. It may help a researcher ascertain the “minimum impact” his or her work has made on the scholarly community; it can provide a sense of who’s bookmarking or responding to that work. But it’s not yet an indicator of comprehensive impact. “Take it all with a grain of salt,” a warning on the site advises. “The meaning of these metrics are not yet well understood.”
One of Mr. Priem’s Total-Impact partners is Heather A. Piwowar. As a postdoctoral researcher at DataONE, affiliated with the National Evolutionary Synthesis Center and the Dryad digital repository, she studies patterns in how researchers share and reuse data. She and Mr. Priem have been building Total-Impact in their spare time. “Our day jobs are being a grad student and a postdoc,” she says, but “we just couldn’t stop ourselves. It seemed to have such profound possibilities.”
The main difficulty they’ve encountered, she says, is finding sources of open data. Every blog post has a URL, and “you can search Twitter and other places for that URL,” she says. But the Total-Impact algorithms can’t just rely on Google searches, because those “aren’t open and free data,” she says. There’s a lot of information behind the results of a Google search that Total-Impact can’t really get to yet.
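In code terms, the constraint Ms. Piwowar describes looks something like this: given a post’s URL, ask an open search API for mentions, because scraping Google results is off the table. The endpoint below is a hypothetical stand-in:

```python
import requests

def mention_count(url):
    """Count public mentions of a URL via an open search API.
    The endpoint is a hypothetical stand-in; the requirement is
    that the data be open and free, which rules out Google."""
    resp = requests.get(
        "https://search.example.org/search.json",  # invented endpoint
        params={"q": url},
        timeout=10,
    )
    resp.raise_for_status()
    return len(resp.json().get("results", []))

print(mention_count("https://example-science-blog.org/2012/01/some-post"))
```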
Another technical challenge for altmetrics is what to do about multiple digital “addresses” for a specific article online. Someone who tweets about a paper will probably link to a URL but not include the digital object identifier, or DOI, that makes the paper more permanently findable online, even if the URL changes. “So it’s been more of a challenge than we expected to gather all of the synonym identifiers for an object and then search for all of them” in all the places where people might leave evidence of use, Ms. Piwowar says.
Right now, the Total-Impact group has to go ask Mendeley for an article’s permanent Mendeley address, or “identifier,” PubMed for its identifier, and so on. “Having one place where a lot of these identifiers are aggregated would be very helpful,” she says.
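In other words, before any counting starts, the tool has to build an alias table one service at a time. A minimal sketch, with the per-service lookups stubbed out because, as Ms. Piwowar notes, there is no single aggregator to ask:

```python
def lookup_pubmed_id(doi):
    # Stub: the real version would ask PubMed's open API for
    # the PMID matching this DOI.
    return None

def lookup_mendeley_id(doi):
    # Stub: the real version would ask Mendeley's open API for
    # its internal identifier for this DOI.
    return None

def gather_aliases(doi):
    """Collect every known identifier for one article so each
    impact source can be searched under all of them. Each alias
    costs a separate round trip to a separate service."""
    return {
        "doi": doi,
        "url": "https://doi.org/" + doi,  # resolvable URL form
        "pmid": lookup_pubmed_id(doi),
        "mendeley": lookup_mendeley_id(doi),
    }

# Placeholder DOI, purely for illustration.
print(gather_aliases("10.1371/journal.pone.0000000"))
```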
Software and data can be especially tricky to track. A piece of code may be hosted by an open repository like GitHub but not cited in ways that are easily recognized.
And scholarly culture doesn’t always encourage openness. “There’s a lack of reward for sharing data,” Ms. Piwowar says.
Altmetrics’ emphasis on openness aligns it with the open-access movement, whose goal is to make published research freely available online. “Once you see the potential for using the Web for research communication,” says Britain’s Mr. Neylon, it’s hard to look at the traditional model of scholarly communication “without a growing sense of horror.”
Altmetrics has made some inroads in the publishing world. For instance, one open-access publisher, the Public Library of Science, or PLoS, has been experimenting seriously with article-level metrics, a fresh way to measure who’s using PLoS articles and how.
Unlike PLoS, however, many publishers are not keen to share usage statistics with the world. Neither are some institutional repositories.
Ms. Piwowar says that proprietary attitude is the wrong approach for publishers to take. Altmetrics “is a call to people who host research projects to make information about their impact openly accessible,” she says.
Gaming the System
As its proponents themselves acknowledge, the altmetrics approach has vulnerabilities that go beyond how much data can be had for free. Just because an idea gets buzz online doesn’t always mean it has genuine intellectual value, as anyone who follows social media knows. And what about gaming the system?
“Can Tweets Predict Citations?” asked a paper published last year in the Journal of Medical Internet Research by Gunther Eysenbach, a senior scientist and professor of health policy at the University of Toronto. Based on a study he conducted of tweets about journal articles, Dr. Eysenbach concluded that the answer is yes; tweets often do flag papers that turn out to be important.
But measures of influence on Twitter “should be primarily seen as metrics for social impact (buzz, attentiveness, or popularity) and as a tool for researchers, journal editors, journalists, and the general public to filter and identify hot topics,” the researcher wrote. He cautioned that significant research in many fields wasn’t necessarily going to get picked up by people who are on Twitter.
But traditional citations too have limitations, Dr. Eysenbach pointed out; social-media-based metrics should be considered complementary to citations rather than alternatives to them.
The key question might be how vulnerable altmetrics, or any metric, is to being gamed. Traditional measures of influence aren’t immune to corruption; journals have been known to drive up their impact factors through self-citation.
Mr. Taraborelli of the Wikimedia Foundation says “we should expect major attempts at gaming the system” if and when altmetrics really catches on. “My expectation is it will be an arms race,” he says. But there are ways to build in safeguards against gaming, he says, much as people keep creating better spam filters.
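Mr. Taraborelli doesn’t spell out the safeguards, but one simple family of filters works like spam filtering: discount repetition from a single source. A minimal sketch, with an illustrative one-mention-per-account rule that is an assumption here, not anyone’s production system:

```python
from collections import Counter

def filtered_tweet_count(tweets, per_account_cap=1):
    """Count tweets linking to an article while capping each
    account's contribution, so a bot tweeting the same link in
    a loop inflates nothing."""
    per_account = Counter(t["account"] for t in tweets)
    return sum(min(n, per_account_cap) for n in per_account.values())

# A bot posting the same link 50 times counts once; 50 distinct
# accounts each posting once count as 50.
bot = [{"account": "bot123"}] * 50
organic = [{"account": "user%d" % i} for i in range(50)]
print(filtered_tweet_count(bot))      # 1
print(filtered_tweet_count(organic))  # 50
```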
The inclusive, diffuse approach that drives altmetrics may actually help protect it. A Godzilla-like monster ranking “is the best way to manipulate the system, to make it dependent on curation strategies that may end up invalidating the metric itself,” Mr. Taraborelli says. “The last thing we want is a system that’s dominated by a monolithic ranker for all the scholarly literature.”
Researchers’ behavior on the social Web works against the idea that one number should rule them all, Mr. Taraborelli says: “I think we’re moving to a system where, regardless of the benefits of single, monopolistic metrics, people will be able […]”
