Gregory Elacqua circulates the ranking of education researchers in the United States by public impact, an earlier version of which we covered here some time ago. He accompanies the information with the following note:
A very interesting initiative by Rick Hess, Director of Education Policy Studies at the American Enterprise Institute in Washington DC, to rank the public impact of U.S. academics working on education policy (http://blogs.edweek.org/edweek/rick_hess_straight_up/2012/01/rhsu_exclusive_the_five-tool_policy_scholar_1.html). It uses the following metrics: (1) Google Scholar score; (2) Book points; (3) Amazon book rating; (4) Mentions in the education press; (5) Mentions in blogs; (6) Mentions in newspapers; (7) Mentions in the Congressional Record (e.g., testimony before the education committee). Linda Darling-Hammond, Diane Ravitch, Eric Hanushek, and Larry Cuban are the top four. It would be interesting to replicate Hess's ranking in Chile and other Latin American countries.
Gregory Elacqua
Director
Instituto de Políticas Públicas
Facultad de Economía y Empresa
Universidad Diego Portales
RHSU Exclusive: The Five-Tool Policy Scholar
By Rick Hess, Education Week, on January 3, 2012 7:51 AM
Tomorrow in this space, I’ll be publishing the 2012 RHSU Edu-Scholar Public Presence Rankings. Today, just like last year, I want to take a few moments to explain what those ratings are about and how they were generated.
The exercise starts from two simple premises: (1) ideas matter and (2) people tend to devote more time and energy to those activities that are acknowledged and lauded. The academy today does a passable job of recognizing good disciplinary scholarship but a pretty mediocre job of recognizing scholars who effectively help to move ideas from the pages of barely-read journals into the national conversation around schools and schooling. This state of affairs may work fine when it comes to the study of material science or Renaissance poetry, but it doesn’t cut it for those wanting to encourage social scientists with something to say to wade responsibly into public debates.
In baseball, the ideal is the “five-tool” ballplayer. This is a player who can run, field, throw, hit, and hit with power. A terrific ballplayer might excel at just a couple of these, but there’s a special appreciation for those with a full suite of skills.
Among scholars who do policy-relevant research, there’s an analogous need for us to do a much better job appreciating scholars who do more than publish opaque articles in niche journals, sit on committees, and serve as officials in professional associations. To my mind, the engaged policy scholar is a “five-tooler” in her own right.
As I see it, the extraordinary policy scholar excels in five areas: disciplinary scholarship, policy analysis and popular writing, convening and quarterbacking collaborations, providing incisive media commentary, and speaking in the public square. It’s the scholars who are skilled in most or all of these areas who can cross boundaries, foster crucial collaborations, and bring research into the world of policy in smart and useful ways. The academy, though, treats many of these skills as an afterthought–if not an outright blemish on a scholar’s record! And while foundations fund evaluations, convenings, policy analysis, and dissemination, few make any particular effort to develop multi-skilled scholars or support this whole panoply of activity.
Today, academe offers big professional rewards for scholars who stay in their comfort zone while pursuing narrow, hyper-sophisticated research, but little recognition, acknowledgment, or support for scholars who operate as “five-tool” scholars. One result is that the public square is filled by impassioned advocates, while we hear far less than I’d like from those who are more versed in the research and equipped to recognize complexities and explain hard truths. Now, one can hardly blame those academics who seek to avoid the unpleasantness by remaining swaddled in the pleasant irrelevance of the ivory tower. After all, wading into the public debate can anger friends and call forth vituperative personal attacks. One small way to encourage academics to step into the fray and to push back on the academic norms fueling the status quo is, I think, to do more to recognize the value of engaging in public discourse and the scholars who do so.
With that aim, tomorrow’s Edu-Scholar rankings offer one way to gauge whether and how scholars are impacting the public discourse. The scores really reflect three things: the influence of a scholar’s articles and academic scholarship, their body of work when it comes to books, and their impact on conversation as reflected in old and new media. Broadly speaking, the scores generally draw about 40 percent on scholarly influence in terms of bodies of work and citation counts, 25 percent on book authorship and current book success, and about 35 percent on presence in new and old media.
Readers will note that the rankings do not address things like teaching, mentoring, and community service. Such is the nature of things. These scores are not imagined as a summative measure of a scholar’s contribution to teaching and knowledge. Rather, they are a counterpart to traditional publication-heavy measures of research productivity. Those results tell us something, but don’t offer much insight into how scholars in a field of public concern are influencing thinking and the national discourse. These results are designed to say more on that score.
The RHSU Edu-Scholar Public Presence Scoring Rubric
We opted to employ metrics that are publicly available, readily comparable, and replicable by third parties. This obviously limits the nuance and sophistication of the measures. The scoring is determined as follows:
Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A neat, commonly used technique for measuring breadth and impact is to rank the scholar’s works in descending order of how often each is cited, and then to find the largest number h such that the scholar’s h most-cited works have each been cited at least h times (the familiar h-index). For instance, a scholar who had 10 works that were each cited at least 10 times, but whose 11th most-frequently cited work was cited just 9 times, would score a ten. A scholar who had 27 works cited at least 27 times, but whose 28th work was cited 27 times or fewer, would receive a 27. An assistant professor will typically have a number in the low single digits, while veteran scholars may score a 40 or higher. This reflects the fact that bodies of work matter, by influencing what others think and how issues are understood. By design, this bias favors veteran scholars. The search was conducted on December 20-21, 2011, using the scholar’s name under the “author” filter in an advanced search in Google Scholar, with the search limited to the “Business, Administration, Finance, and Economics” and “Social Sciences, Arts, and Humanities” categories. A hand-search culled out works by other, similarly named, individuals. While Google Scholar has its flaws and is less precise than more specialized citation databases for such a search, it has the virtues of being multidisciplinary and publicly accessible. This category ultimately counted the most–amounting to between 25 percent and 60 percent of the score for most scholars–as it’s a quick way to gauge both the expanse and influence of a scholar’s body of work.
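The tally described above is the familiar h-index. A minimal sketch, assuming the per-work citation counts have already been pulled from Google Scholar (the function name and input format are illustrative, not part of Hess's tooling):

```python
def scholar_score(citation_counts):
    """Return the largest h such that h of the scholar's works
    have each been cited at least h times (an h-index tally)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited work still clears the bar
        else:
            break
    return h

# The post's first worked example: 10 works cited at least 10 times,
# with the 11th most-cited work cited just 9 times.
print(scholar_score([10] * 10 + [9]))  # → 10
```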
Book Points: An author search on Amazon was used to tally the number of books a scholar had authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book in which they were the lead author, a half-point for co-authored books where they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or middle name was used to avoid duplications with authors who had the same name, e.g. “David Cohen” became “David K. Cohen,” and “Deborah Ball” became “Deborah Loewenberg Ball.”) The “format” field was restricted to “Printed Books” so as to avoid double-counting books that are also available as e-books. This obviously means that books released only as e-books are omitted. However, circa 2011, that seemed a modest price to avoid double-counting and to maximize accuracy (given that very few relevant books, as of yet, are released only as e-books; this is likely to change in fairly short order). In each category, a hand-search sought to guard against double-counting and to ensure an accurate score. Amazon-available reports and articles were excluded, as was any source listed as “out of print”–only published, available books were included. The search was conducted December 20-21. The high score in this category was 37.5, but most scholars scored between zero and 20.
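The book-point weights reduce to a small lookup table. A hypothetical sketch (the role labels are my own shorthand for the four authorship categories, not Hess's):

```python
# Points per title, keyed by the scholar's role on that book.
BOOK_POINTS = {
    "single_author": 2.0,   # single-authored book
    "lead_coauthor": 1.0,   # co-authored, scholar is lead author
    "coauthor": 0.5,        # co-authored, scholar is not lead author
    "edited": 0.5,          # edited volume
}

def book_points(roles):
    """Sum points over a scholar's Amazon-listed books, one role per title."""
    return sum(BOOK_POINTS[role] for role in roles)

# One book of each kind: 2 + 1 + 0.5 + 0.5 points.
print(book_points(["single_author", "lead_coauthor", "coauthor", "edited"]))  # → 4.0
```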
Highest Amazon Ranking: The sales rank of the author’s highest-ranked book on Amazon, as of December 20-21, was subtracted from 400,000, and that figure was divided by 20,000 to derive a point total of somewhere between zero and 20. This score, due to the nature of Amazon’s ranking algorithm, is fairly volatile and biased in favor of more recent works. For instance, a book may have been very influential in the 1990s, impacting citation counts and the likelihood that a scholar is quoted in newspapers, but may not produce points in this category in 2011. The result is a decidedly imperfect way to gauge the impact of books, but one that conveys real information. To that point, many of the books that have stoked public discussion in the past few years fared relatively well. About a third of the scholars examined, including fifteen of the top twenty, scored points in this category.
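The Amazon conversion is a one-line formula. A sketch; note that flooring the result at zero for sales ranks beyond 400,000 is my assumption, since the post only says scores fall between zero and 20:

```python
def amazon_points(best_sales_rank):
    """Convert the author's best Amazon sales rank into 0-20 points
    via (400,000 - rank) / 20,000, floored at zero (an assumption)."""
    return max(0.0, (400_000 - best_sales_rank) / 20_000)

# A best-selling rank of 100,000 yields (400,000 - 100,000) / 20,000 points.
print(amazon_points(100_000))  # → 15.0
```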
Education Press Mentions: The total number of times the scholar was quoted or mentioned in Education Week or the Chronicle of Higher Education between January 1 and December 20-21. The search was conducted using each scholar’s first and last name. To norm the value of this category, the total number of appearances was divided by 2 to calculate Ed Press points. Scores in this category ranged from zero to 41.5, with most falling between zero and ten.
Blog Mentions: Based on a search using Google Blogs, this reflects the number of times a scholar was quoted, mentioned, or otherwise discussed in blogs between January 1 and December 20-21. The search was conducted using each scholar’s name, plus their affiliation (e.g. “Bill Smith” and “Rutgers”). Requiring university affiliation serves a dual purpose: avoiding confusion due to common names while ensuring that scores aren’t padded by a scholar’s blog posts (which generally don’t identify a scholar by affiliation). If bloggers are provoking discussion, the figures will reflect that. If a scholar is mentioned sans affiliation, that mention is omitted here; but that’s true across-the-board. If anything, that probably tamps down the scores of well-known scholars for whom university affiliation may seem unnecessary. Especially since the Ravitches, Hanusheks, Arums, and Darling-Hammonds still fare just fine, I’m good with that. Because blogging can tend towards the informal, the blog search also included the most common diminutive for a given scholar (e.g., “Rick Hanushek” as well as “Eric Hanushek;” “Pat McGuinn” as well as “Patrick McGuinn”). To norm the value of this category, points were calculated by dividing the total number of mentions by four. We also chose to cap the scores at 50 points to ensure that the rankings recognize impactful contributions without allowing the blog metric to overwhelm the other metrics. Twelve scholars hit the 50 point cap, but the vast majority of scholars scored between zero and 20.
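The blog-mention normalization above (divide raw mentions by four, cap at 50 points) can be sketched as:

```python
def blog_points(raw_mentions):
    """Divide raw blog mentions by four, capped at 50 points so this
    category cannot overwhelm the other metrics."""
    return min(50.0, raw_mentions / 4)

print(blog_points(60))   # → 15.0 (uncapped)
print(blog_points(240))  # → 50.0 (cap applies: 240 / 4 = 60 exceeds 50)
```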
Newspaper Mentions: Based on a search using Lexis Nexis, the number of times a scholar was quoted or mentioned in U.S. newspapers between January 1 and December 20-21. Like Blog Mentions, the search was conducted using each scholar’s name plus their affiliation. To norm the value of this category, points were calculated by dividing the total number of mentions by four. Scores ranged from zero to 26.8, with most falling between zero and ten.
Congressional Record Mentions: We conducted a simple name search in the Congressional Record for 2011 to determine whether a given scholar was called to testify or if their work was referenced by a member of Congress. The reference or testimony had to have occurred on or before December 21. If a scholar was included in either capacity, they received five points in this category.
There are obviously lots of provisos in making sense of the results. Different disciplines approach books and articles differently. Scholars of K-12 and higher education may have different opportunities to engage in the public square. Senior scholars have obviously had more of a chance to build a body of work.
Moreover, some readers may have more use for some of these categories than for others. That’s fine. The whole point is to encourage discussion and debate about the nature of responsible public engagement, who’s doing a particularly good job of it, how much these things matter, and how to accurately measure a policy scholar’s contribution.
Two questions sure to arise: Can somebody game this rubric? Am I concerned that this exercise will encourage academics to chase publicity? As for gaming, I’m not at all concerned. If scholars (against all odds) are motivated to write more relevant articles, pen more books that might sell, or be more aggressive about communicating their ideas and research in an accessible fashion, I think that’s great. That’s not “gaming,” it’s just good public scholarship. As for academics working harder to communicate beyond the academy–well, there’s obviously a point where public engagement becomes sleazy P.R., but most academics are so immensely far from there that I’m not unduly concerned.
A final note. Tomorrow’s rankings will feature 121 university-based edu-scholars who are widely regarded as having some public presence. However, this list is not intended to be exhaustive. There are many other faculty addressing public questions of education or education policy, and some of them may grade out quite highly on these metrics. Tomorrow’s scores are for a prominent cross-section of faculty, from various disciplines, institutions, generations, and areas of inquiry. For those interested in scoring additional scholars, it should be straightforward to do so using the rubric sketched above. Indeed, the exercise was designed so that anyone with an Internet connection can generate a comparative rating for a given scholar in no more than 15-20 minutes. (At this end, for his assiduous labor and invaluable advice on how to pull this together, I owe a big shout-out to my indefatigable and eagle-eyed research assistant, Daniel Lautzenheiser. I also want to give a shout-out to his colleagues Becky King and Taryn Hochleitner).