With economists struggling to derive credible explanations of macroeconomic conditions from the prevailing models, those who have long criticized the discipline for its scientific pretensions are feeling vindicated. And it remains to be seen whether economists will – or should – heed the feedback.
In this Big Picture, Mark Cliffe of ING Group points to three major lessons from the past decade that mainstream economists still refuse to accept, owing to their commitment to discredited assumptions. And Robert Skidelsky of Warwick University explains, more broadly, how economists’ quest for predictive certainty led them to double down on mathematical modeling and ignore human and historical contingencies.
Still more broadly, Cambridge’s Diane Coyle points out that all of academia – not just economics – has succumbed to an increasingly narrow-minded specialization. And Harvard’s Ricardo Hausmann rejects the notion that economics should bear the blame for outcomes that actually emanate from shortcomings in the field of public policy.
What Economists Still Need to Learn
AMSTERDAM – Macroeconomics was one of the casualties of the 2008 global financial crisis. Conventional macroeconomic models failed to predict the calamity or to provide a coherent explanation for it, and thus were unable to offer guidance on how to repair the damage. Despite this, much of the profession remains in denial, hankering for a return to “normal” and in effect treating the crisis as just a rude interruption.
That needs to change. Although an economic recovery has taken root, its structural fragilities suggest that macroeconomics is still in pressing need of an overhaul. Three sets of lessons from the past decade stand out.
First, the presumption that economies are self-correcting, while tempting in good times, is unfounded and can have catastrophic consequences. The recovery of the past few years has lulled many into a false sense of security, because it was the result of unconventional policy responses that transcended mainstream “general equilibrium” thinking.
Moreover, pre-crisis economic models are struggling to cope with the disruption unleashed by emerging digital technologies. The digital economy is characterized by increasing returns to scale, whereby Big Tech companies rapidly exploit network effects to dominate a growing array of markets. This has upended incumbent business models and transformed behavior in ways that have left macroeconomists and policymakers struggling – and mostly failing – to keep pace.
Consequently, the widespread belief that economic activity will follow a regular cycle around a stable growth trend is not very helpful beyond the very short term. Rather, the economic disruptions we are experiencing highlight an obvious fact, but one that prevailing models assume away: the future is fundamentally uncertain, and not all risks are quantifiable.
Precisely for that reason, we should reject the notion that emerged in the aftermath of the crisis that the world would enter a “new normal.” In the face of evolving structural shifts in finance, technology, society, and politics, it is far more useful to think in terms of a “New Abnormal,” in which economies are characterized by actual or latent structural instability.
The second lesson from the crisis is that balance sheets matter. The financialization of the global economy leaves national economies vulnerable to major corrections in asset prices that can render debt unserviceable. Macroeconomic models that focus on flows of income and spending ignore the critical role played by such wealth effects. Compounding the problem, these models are unable to predict asset prices, because the latter reflect investors’ beliefs about future returns and risks. In other words, asset prices are hard to forecast because they are themselves forecasts.
Moreover, financial reregulation since the crisis has not necessarily solved the balance-sheet problem. True, individual banks have become more resilient as a result of having to raise their capital and liquidity buffers substantially. But years of unprecedented monetary easing and large-scale asset purchases by central banks have encouraged risk-taking across the economic and financial system in ways that are harder to track and predict. In addition, policymakers’ determination to limit taxpayers’ exposure when financial institutions fail has led to risks being shifted onto investors through the use of instruments such as “bail-in-able” bonds. The systemic effects of such ongoing regulatory changes won’t be clear until the next recession strikes.
There is also a growing recognition that financial balance sheets are not the only type that matter. As climate change and environmental degradation move up the political agenda, macroeconomists are beginning to appreciate the importance of other, less volatile forms of capital for sustainable growth and wellbeing. In particular, they need to understand better the interaction of produced capital, whether tangible or intangible; human capital, including skills and knowledge; and natural capital, which includes the renewable and non-renewable resources and environment that support life.
Lastly, macroeconomists must recognize that distribution matters. Trying to model households’ economic behavior on the basis of a single “representative agent” elides crucial differences in the experiences and behavior of people in different income and wealth brackets.
The fact that the rich disproportionately benefited from globalization and new technologies, not to mention from central banks’ successful efforts to boost equity and bond prices after 2009, has arguably been a drag on growth. What is certain is that widening inequality has dramatically reduced support for mainstream politicians in favor of populists and nationalists, in turn corroding the previous policy consensus that sustained fiscal probity, independent monetary policy, free trade, and the liberal movement of capital and labor.
The global backlash against the economic and political status quo has also targeted big business. In the immediate aftermath of the crisis, financial institutions were in the firing line. But popular anger has since morphed into a general skepticism about corporate behavior, with the tech giants coming under particular scrutiny for alleged abuses of user data and monopoly power.
It would be too simplistic to view these tensions as the result of resentment toward the top 1%. There are substantial divisions within the remaining 99% between winners and losers from globalization. Moreover, divisions between countries have intensified as populists and nationalists blame foreigners for domestic economic and social problems.
This has contributed to wider questioning of globalization and international trade, investment, and tax rules. Changes in global governance arrangements may disrupt business models, transform the institutional framework, and add a fresh layer of uncertainty to the economic outlook.
The macroeconomics profession has yet to come to terms with the most important lessons of the past decade. And without a new consensus on how to manage uncertainty, the world is uncomfortably vulnerable to fresh economic, social, and political shocks. Sadly, another crisis may be needed to force economists to abandon their outmoded ways.
The Fall of the Economists’ Empire
LONDON – The historian Norman Stone, who died in June, always insisted that history students learn foreign languages. Language gives access to a people’s culture, and culture to its history. Its history tells us how it sees itself and others. Knowledge of languages should thus be an essential component of a historian’s technical equipment. It is the key to understanding the past and future of international relations.
But this belief in the fundamental importance of knowing particular languages has faded, even among historians. All social sciences, to a greater or lesser degree, start with a yearning for a universal language, into which they can fit such particulars as suit their view of things. Their model of knowledge thus aspires to the precision and generality of the natural sciences. Once we understand human behavior in terms of some universal and – crucially – ahistorical principle, we can aspire to control (and of course improve) it.
No social science has succumbed to this temptation more than economics. Its favored universal language is mathematics. Its models of human behavior are built not on close observation, but on hypotheses that, if not quite plucked from the air, are unconsciously plucked from economists’ intellectual and political environments. These then form the premises of logical reasoning of the type, “All sheep are white, therefore the next sheep I meet will be white.” In economics: “All humans are rational utility maximizers. Therefore, in any situation, they will act in such a way as to maximize their utility.” This method gives economics a unique predictive power, especially as the utilities can all be expressed and manipulated quantitatively. It makes economics, in Paul Samuelson’s words, the “queen of the social sciences.”
In principle, economists don’t deny the need to test their conclusions. At this point, history, one might have thought, would be particularly useful. Is it really the case that all sheep are white, in every place and clime? But most economists disdain the “evidence” of history, regarding it as little better than anecdotage. They approach history by one route: econometrics. At best, the past is a field for statistical inquiry.
The economist Robert Solow offers a devastating critique of the identification of economic history with econometrics – an approach he calls “history blind”:
“The best and brightest in the profession proceed as if economics is the physics of society. There is a single universally valid model. It only needs to be applied. You could drop a modern economist from a time machine … at any time, in any place, along with his or her personal computer; he or she could set up in business without even bothering to ask what time and which place.”
In short, much of the historical modeling economists do assumes that people in the past had essentially the same values and motives as we do today. The Nobel laureate economist Robert Lucas carries this approach to its logical conclusion: “the construction of a mechanical, artificial world, populated by … interacting robots …, that is capable of exhibiting behavior the gross features of which resemble those of the actual world.”
The goal of economics is to replace the particular languages that obstruct the discovery of general laws with the universal language of mathematics. Elon Musk takes Lucas’s interacting robots one step further, with his ambition to link the human brain directly to the world (which includes other human brains). Our thoughts will be directly socialized without the intermediation of any language. When you think “door, open!” it does. Whereas economists dream of putting God in their models, the robotic utopians dream of reversing the fall of man by creating godlike humans.
To be clear, this is the apotheosis of a Western conceit. The West still views itself as the bearer of universal civilization, with the non-West no more than a lagging cultural indicator. In the West itself, the authority of economics has diminished, but this hasn’t dented the West’s propensity to export its civilization. “Good economics” has been partly replaced by a commitment to universal human rights as the means to save the world from itself, but the purpose is the same: to lecture everyone else on their shortcomings.
Here, we encounter a paradox. The triumph of universalism has come just when Western power is collapsing. And it was that power which made Western thought seem universal in the first place. Conquest, not missionaries, spread Christianity around the world.
The same is true of Western social science and Western values in general. The non-West bought into the Western model of progress, especially economic progress, because it wanted to free itself from Western tutelage. This still gives economics (a Western invention) its edge. It’s a kind of white man’s magic. But without the power and authority behind the magic, its appeal is bound to fade. The non-West will still want to emulate the West’s success, but will pursue it by its own means. The University of Chicago and MIT will give way to universities in China or India, and the non-West will choose which Western values to embrace.
Yet the world needs something universal to give us a sense of shared humanity. The big challenge – to use that overworked word – is to develop what the philosopher Thomas Nagel called a “view from nowhere” that transcends both cultural fetishism and scientism, and does not force us to choose between them. This is a task for philosophy, not economics.
The Puzzle of Economic Progress
CAMBRIDGE – Do we know how economies develop? Obviously not, it seems; otherwise, every country would be doing better than it currently is in these low-growth times. In fact, cases of sustained rapid growth, like Japan beginning in the 1960s, or other East Asian countries a decade later, are so rare that they are often described as “economic miracles.”
Yet when Patrick Collison of software infrastructure company Stripe and Tyler Cowen of George Mason University recently wrote an article in The Atlantic calling for a bold new interdisciplinary “science of progress,” they stirred up a flurry of righteous indignation among academics.
Many pointed to the vast amount of academic and applied research that already addresses what Collison and Cowen propose to include in a new discipline of “Progress Studies.” Today, armies of economists are researching issues such as what explains the location of technology clusters like Silicon Valley, why the Industrial Revolution happened when it did, or why some organizations are much more productive and innovative than others. As the University of Oxford’s Gina Neff recently remarked on Twitter, the Industrial Revolution even gave birth to sociology, or what she called “Progress Studies 1.0.”
This is all true, and yet Collison and Cowen are on to something. Academic researchers clearly find it hard to work together across disciplinary boundaries, despite repeated calls for them to do so more often. This is largely the result of incentives that encourage academics to specialize in ever-narrower areas, so that they can produce the publications that will lead to promotion and professional esteem. The world has problems, as the old saying puts it, but universities have departments. Interdisciplinary research institutes like mine and Neff’s therefore have to consider carefully how best to advance the careers of younger colleagues. The same silo problem arises in government, which is likewise organized by departments.
Moreover, fashions in research can lead to hugely disproportionate intellectual efforts in specific areas. To take one example, the ethics of artificial intelligence is clearly an important subject, but is it really the dominant research challenge today, even in the fields of AI or ethics? The financial incentives embedded in technology companies’ business models seem to me at least as important as morality in explaining these firms’ behavior.
At the same time, some important economic questions are curiously underexplored. For example, in his recent book The Technology Trap, Carl Frey expands on his gloomy view of what automation will mean for the jobs of the future, pointing to the adverse effects that the original Industrial Revolution had on the typical worker. Yet Frey also notes that a later period of automation, the era of mass production in the mid-twentieth century, was one of high employment and increasingly broad-based prosperity. What explains the great difference between those two eras?
More generally, researchers need to distill their findings in an accessible way for policymakers – particularly when there are significant scholarly disagreements – and persuade decision-makers to act on them. Yet although the public broadly trusts academic research, most academics are poor communicators (which again reflects their professional incentives). Besides, the last thing some politicians want is evidence that disproves a dearly held belief. And even open-minded officials often struggle to find easily digestible academic expertise on the state of knowledge, particularly on questions concerning novel science and technology.
Today, the role of research in changing behavior – whether that of government officials or of businesses and citizens – is part of the broader crisis of legitimacy in Western democracies. By the early 2000s, technocrats – and economists in particular – ruled the roost, and governments delegated large swaths of policy to independent expert bodies such as central banks and utility regulators. But then came the 2008 global financial crisis. With real incomes stagnating for many, and “deaths of despair” increasing, it is not surprising that expertise has lost its luster for much of the public.
This leads to a final point about the need for a science of progress: what do we actually mean by “progress”? How should it be measured and monitored, and who experiences it? For many reasons, the standard indicator of real GDP growth, which leaves out much of what people value, will no longer do.
The debate about progress therefore raises profound political and philosophical questions about the kind of societies we want. If the global economy falls into recession, as now seems likely, then social divisions and political polarization will intensify further. And the clear message since the turn of the millennium is that if most people do not experience progress, then society isn’t really progressing at all.
Current academic research – into the impact of new technologies, the economics of innovation, and the quality of management, for example – may be providing ever more pieces of the puzzle. But many crucial questions about economic progress remain unanswered, and others have not yet even been properly posed.
Don’t Blame Economics, Blame Public Policy
AMMAN – It is now customary to blame economics or economists for many of the world’s ills. Critics hold economic theories responsible for rising inequality, a dearth of good jobs, financial fragility, and low growth, among other things. But although criticism may spur economists to greater efforts, the concentrated onslaught against the profession has unintentionally diverted attention from a discipline that should shoulder more of the blame: public policy.
Economics and public policy are closely related, but they are not the same, and should not be seen as such. Economics is to public policy what physics is to engineering, or biology to medicine. While physics is fundamental to the design of rockets that can use energy to defy gravity, Isaac Newton was not responsible for the Challenger space shuttle disaster. Nor was biochemistry to blame for Michael Jackson’s death.
Physics, biology, and economics, as sciences, answer questions about the nature of the world we inhabit, generating what economic historian Joel Mokyr of Northwestern University calls propositional knowledge. Engineering, medicine, and public policy, on the other hand, answer questions about how to change the world in particular ways, leading to what Mokyr terms prescriptive knowledge.
Although engineering schools teach physics and medical schools teach biology, these professional disciplines have grown separate from their underlying sciences in many respects. In fact, by developing their own criteria of excellence, curricula, journals, and career paths, engineering and medicine have become distinct species.
Public-policy schools, by contrast, have not undergone an equivalent transformation. Many of them do not even hire their own faculty, but instead use professors from foundational sciences such as economics, psychology, sociology, or political science. The public-policy school at my own university, Harvard, does have a large faculty of its own – but it mostly recruits freshly minted PhDs in the foundational sciences, and promotes them on the basis of their publications in the leading journals of those sciences, not in public policy.
Policy experience before achieving professorial tenure is discouraged and rare. And even tenured faculty have surprisingly limited engagement with the world, owing to prevailing hiring practices and a fear that engaging externally might entail reputational risks for the university. To compensate for this, public-policy schools hire professors of practice, such as me, who have acquired prior policy experience elsewhere.
When it comes to teaching, you might think that public-policy schools would adopt a similar approach to medical schools. After all, both doctors and public-policy specialists are called upon to solve problems and need to diagnose the respective causes. They also need to understand the set of possible solutions and figure out the pros and cons of each. Finally, they need to know how to implement their proposed solution and evaluate whether it is working.
Yet most public-policy schools offer only one- or two-year master’s programs, and have a small PhD program with a structure typically similar to that in the sciences. That compares unfavorably with the way medical schools train doctors and advance their discipline.
Medical schools (at least in the United States) admit students after they have finished a four-year college program in which they have taken a minimum set of relevant courses. Medical students then undergo a two-year program of mostly in-class teaching, followed by two years in which they are rotated across different departments in so-called teaching hospitals, where they learn how things are done in practice by accompanying attending (or senior) doctors and their teams.
At the end of the four years, young doctors receive a diploma. But then they must start a three- to nine-year residency (depending on the specialty) in a teaching hospital, where they accompany senior doctors but are given increasing responsibilities. After seven to 13 years of postgraduate studies, they finally are permitted to practice as doctors without supervision, although some do additional supervised fellowships in specialized areas.
By contrast, public-policy schools essentially stop teaching students after their first two years of mostly in-class education, and (aside from PhD programs) do not offer the many additional years of training that medical schools provide. Yet the teaching-hospital model could be effective in public policy, too.
Consider, for example, Harvard University’s Growth Lab, which I founded in 2006 after two highly fulfilling policy engagements in El Salvador and South Africa. Since then, we have worked in more than three dozen countries and regions. In some respects, the Lab looks a bit like a teaching and research hospital. It focuses both on research and on the clinical work of serving “patients” – governments, in our case. Moreover, we recruit recent PhD graduates (equivalent to freshly minted MDs) and graduates of master’s programs (like medical students after their first two years of school). We also hire college graduates as research assistants, or “nurses.”
In addressing the problems of our “patients,” the Lab develops new diagnostic tools to identify both the nature of the constraints they face and therapeutic methods to overcome them. And we work alongside governments to implement the proposed changes. That is actually where we learn the most. In that way, we ensure that theory informs practice, and that insights gained from practice inform our future research.
Governments tend to trust the Lab, because we do not have a profit motive, but rather just a desire to learn with them by helping them solve their problems. Our “residents” stay with us for three to nine years, as in a medical residency, and often take up senior positions in their own countries’ governments after they leave. Instead of using our acquired experience to create “intellectual property,” we give it away through publications, online tools, and courses. Our reward is others adopting our methods.
This structure was not planned: it just emerged. It was not promoted from the top, but was simply allowed to evolve. However, if the idea of these “teaching hospitals” were embraced, it could radically change the way public policy is advanced, taught, and put at the service of the world. Maybe people would then stop blaming economists for things that never should have been their responsibility in the first place.