How Best to Measure the Value of Research
August 8, 2013, 12:10 pm
The following is a guest post by Michael Spence, vice chancellor of the University of Sydney.
When it comes to judging the value of publicly supported research, which measure is better: “quality” or “impact”?
Of course all research is designed to have impact, such as generating new understanding of the universe or of our collective history, or developing new technologies. The challenge is that impact, as defined in this context, is always short term. In the aftermath of the global financial crisis, with governments worldwide striving to balance budgets, there is a risk that a focus on demonstrating short-term impact will override more substantial considerations in research decisions.
An approach based solely on impact fundamentally undermines the innovation system by removing the one player capable of making long-term, infrastructure-intensive research investments: universities. Business generally seeks return on investment over a period of a few years. If universities take a similar approach, there will simply be no other entities globally capable of supporting research on 20-, 30-, or 50-year time horizons.
It is unfortunately an easier PR exercise for governments to say they allocated research funds based on easily perceived impact, with the built-in implication that this is research that matters. It is more difficult to explain how quality research—which may not have easily identified benefits until many successive governments have come and gone—could be a far superior option.
With Britain set to move to its 2014 Research Excellence Framework (with impact a factor in the allocation of funds), Australian policy makers have also flirted with assessing research applications via impact.
In 2007, the Australian government proposed a Research Quality Framework that would require researchers to submit evidence portfolios and impact statements. Impact measures included outstanding social, economic, environmental, or cultural benefit; adoption of new policies, products, attitudes, and behaviors; or efforts to deal with issues of societal importance. All are things easily observed within the life cycle of a single government.
Sen. Kim Carr, the minister for innovation, industry, science, and research, has—to my mind wisely—not been supportive of such short-term measures of impact. Other officials in his position, however, have briefly revisited the framework, including running trials of the viability of impact measures. Those trials revealed a heavy administrative burden in time and cost, as well as other difficulties: proving a causal link between public funding and impact, failure to acknowledge external contributions, and overstated or unsubstantiated claims.
With Australia facing an election in September, there is no certainty that the impact measure will not be resurrected as our political parties finalize their higher-education policies. There are several reasons I believe we should not follow Britain down this path.
First, impact measurement works against fundamental or basic research due simply to the relevant timescales. This is the scholarly work about which Australia’s top research universities, known as the Group of Eight, say, “investment in basic research is equivalent to long-term, patient, capital investment that creates new infrastructure and will provide a sustained flow of opportunities well into the future, many of which we cannot yet envisage.” For instance, modern computers owe their existence to the pure research in mathematics and quantum physics conducted over a century ago, for which there was no known practical application at the time.
Second, impact measures also work against exploratory research and shift emphasis away from trust in quality researchers. To extend the computer example, Michael Biercuk is an experimental physicist in the Australian Research Council Centre for Engineered Quantum Systems at the University of Sydney and principal investigator in our quantum control laboratory. His research group performs cutting-edge basic and applied research experiments using trapped atomic ions for the development of new quantum-enabled technologies. In 2012, Mr. Biercuk was part of an international team that developed an ion-crystal "quantum simulator" that could one day become one of the world's most powerful computers.
But despite this achievement, the time scale for true impact of this work—as defined under the proposed framework—is very long, perhaps decades, according to Mr. Biercuk.
The work of Mr. Biercuk and his global collaborators is of immense quality and holds enormous potential to shape the future, but what products or economic benefits could we measure today? Do we even fully understand what the impacts might be in 20 or 30 years?
As one Australian deputy vice chancellor opined recently, “It can take more than 15 years for a research discovery in a biomedical lab to translate into day-to-day clinical practice. At what point along the lengthy pathway from discovery to application does it become clear that a research project will make an impact beyond the lab?”
Third, an impact measure would likely work against humanities and social-sciences research, where short-term impact is difficult to quantify against the framework. Nonetheless, as the Group of Eight universities argue, humanities, arts, and social-sciences research often has tremendous long-term impact “behind the scenes.” This work supports the development of government policies, expands understanding of issues such as land rights and mining, and can shape society even more than work in the sciences because it is often easier to understand. But this may not happen for years as policy evolves.
At the University of Sydney, Alison Bashford, a professor of modern history, is researching climate change and the history of environmental determinism. Her work could provide invaluable insights into how the world’s population has historically viewed its interaction with the planet and how this has changed. This could make an outstanding contribution to the communication of future environmental-policy responses, with world impact.
As Australia’s Group of Eight argues, “selecting projects for funding according to the excellence of the research proposal and the track record of the researcher is just as much a priority-setting process as is allocating funding on the basis of the research—and should in any case underlie these other kinds of priority-setting processes. There are many studies that demonstrate quite explicitly that research judged as ‘excellent’ through peer review and by the use of citation measures is the research that is most likely to produce significant benefits beyond the research system, even though the researchers concerned did not set out to achieve these benefits directly.”
Allocating funds based on an impact measure favors research of demonstrated, short-term value. It minimizes risk, but it also minimizes the potential to reshape the future.
“High-impact” research, as defined under rubrics like the one proposed in Australia or Britain’s Research Excellence Framework, is better supported by business and industry—a sector of the innovation system less capable of making long-term investments in the generation of new knowledge.
The innovation system functions best when different players contribute in unique ways. Universities should not perform research like a profit-motivated enterprise would. And they should not be encouraged to do so by government policy at the expense of their core mission: the generation and dissemination of knowledge to young scholars and the wider world.
As the Australian Productivity Commission, the Australian government’s independent economic research-and-advisory body, concluded in 2007, “the benefits are likely to be high for R&D in universities and public-sector research agencies, due to their orientation to public-good research and their role in the development of high-quality human capital.”
Throughout its publications and research profiles, the University of Sydney proudly boasts about the ways in which our research is changing the world. This has come through support of quality research—and quality researchers—with limited concern for short-term payoffs. Pitting quality measures against impact ones perpetuates what is really a false dichotomy. Supporting quality and excellence is the key to innovation and the global competitiveness of economies.