Risk, Uncertainty, and Macroeconomic Policy
There is a distinction between risk and uncertainty:
Uncertainty is a measure of our ignorance. Risk is what remains when we know everything that can be known.
Edmund Phelps argues that this distinction helps to explain why monetary policy rules and financial engineering built on the neoclassical model, which treats the risks we face as known and quantifiable, are not holding up well in the current environment, where market participants are uncertain about the very nature of the risks they face. Given this, what would he do differently? Instead of lowering interest rates to combat unemployment, he would accept that the natural rate of unemployment has recently increased, and raise interest rates to prevent inflation:
Our Uncertain Economy, by Edmund Phelps, Commentary, NY Times: In recent times, most economists have pretended that the economy is essentially predictable and understandable. Economic decision- and policy-making in the private and public sectors, the thinking goes, can be reduced to a science. Today we are seeing consequences of this conceit in the financial industries and central banking. "Financial engineering" and "rule-based" monetary policy, by considering uncertain knowledge to be certain knowledge, are taking us in a hazardous direction.
Predictability was not always the economic fashion. In the 1920s, Frank Knight at the University of Chicago viewed the capitalist economy as shot through with "unmeasurable" risks, which he called "uncertainty." John Maynard Keynes wrote of the consequences of Knightian uncertainty for rational action.
Friedrich Hayek began a movement to bring key points of uncertainty theory into the macroeconomics of employment -- a modernist movement later resumed when Milton Friedman and I started the "micro foundations of macro" in the 1960s.
In the 1970s, though, a new school of neo-neoclassical economists proposed that the market economy, though noisy, was basically predictable. All the risks in the economy, it was claimed, are driven by purely random shocks -- like coin throws -- subject to known probabilities, and not by innovations whose uncertain effects cannot be predicted.
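The contrast Phelps draws can be made concrete in a small simulation. The sketch below (my illustration, not from the column; the function names, regime labels, and probabilities are all hypothetical) shows why a forecaster who assumes known probabilities can go badly wrong when the underlying distribution shifts in a way no past data reveal:

```python
# A minimal sketch contrasting measurable risk with Knightian uncertainty.
# Under "risk", shocks come from a distribution whose parameters the
# forecaster knows; under Knightian uncertainty, the true distribution
# can change in ways past data cannot reveal.
import random

random.seed(0)

def measurable_risk_shock():
    # Known probabilities: a fair coin throw, as in the
    # neo-neoclassical model Phelps describes.
    return 1.0 if random.random() < 0.5 else -1.0

def knightian_shock(regime):
    # The forecaster does not know which regime applies, and
    # frequencies observed in one regime say nothing about the next.
    p = {"old": 0.5, "new": 0.9}[regime]  # "new" regime unobservable ex ante
    return 1.0 if random.random() < p else -1.0

# A model calibrated on the "old" regime expects an average shock of 0,
# but in the "new" regime the average shock is roughly 0.8:
draws = [knightian_shock("new") for _ in range(10_000)]
print(sum(draws) / len(draws))
```

The point of the sketch is only structural: no amount of data drawn under the "old" regime would have let the forecaster estimate the "new" regime's probabilities in advance.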
This model took hold in American economics and soon practitioners sought to apply it. Quantitative finance theory became a tool relied on by most banks and hedge funds. Policy rules based on this model were adopted at the Federal Reserve and other central banks.
The neo-neoclassicals claimed big benefits from these changes. They boasted that their statistical approach to risk made the financial sector much more effective in matching lenders with borrowers, with vast savings in labor and increases in profits. They asserted a decline in "volatility" in the U.S. economy and credited it to the monetary policy rules at the Fed.
Current experience is putting these claims to the test.
Subprime lending and the securitization of debt were innovations that, it was believed, offered the prospect of increasing homeownership. But "risk management" was out of its depth here: it had no past data from which to estimate the needed valuations of the novel assets, it did not allow for possible macroeconomic dynamics, and it took inadequate account of the system effects of unknown numbers of entrants into the new business all at nearly the same time.
The claim for rule-based monetary policy is weak on its face. In deciding on the short-term interest rate it controls (the Fed funds rate), the Federal Reserve thinks about the "natural" interest rate -- the rate needed if inflation is neither to rise nor fall. Then the Fed asks whether the expected inflation rate is above or below the target. The Fed also asks whether the unemployment rate is above or below the medium-run "natural" unemployment level -- the level to which, sooner or later, the actual rate will return. ...
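The procedure Phelps describes is essentially a Taylor-type rule, and his objection can be illustrated numerically. The sketch below is my own (the function name, coefficients, and all numeric values are hypothetical, chosen only to make the structure concrete): if the Fed's estimate of the natural unemployment rate is too low, the rule reads elevated unemployment as slack and prescribes a lower rate than it would if the natural rate had in fact risen, which is exactly the misjudgment Phelps warns about.

```python
# Illustrative Taylor-type policy rule: the policy rate is the natural
# real rate plus expected inflation, adjusted for the inflation gap and
# the unemployment gap. Coefficients a and b are hypothetical.
def taylor_type_rule(natural_rate, expected_inflation, target_inflation,
                     unemployment, natural_unemployment,
                     a=0.5, b=0.5):
    """rate = natural real rate + expected inflation
              + a * (inflation gap) - b * (unemployment gap)"""
    inflation_gap = expected_inflation - target_inflation
    unemployment_gap = unemployment - natural_unemployment
    return (natural_rate + expected_inflation
            + a * inflation_gap - b * unemployment_gap)

# Suppose unemployment stands at 5.5% and the true natural rate has
# risen from 4.5% to 5.5% (all figures invented for illustration).
# Using the stale estimate, the rule prescribes a lower rate:
rate_stale_estimate = taylor_type_rule(2.0, 3.0, 2.0, 5.5, 4.5)  # 5.0
rate_updated_estimate = taylor_type_rule(2.0, 3.0, 2.0, 5.5, 5.5)  # 5.5
print(rate_stale_estimate, rate_updated_estimate)
```

With the stale natural-rate estimate the rule calls for a rate of 5.0%, while recognizing the higher natural rate calls for 5.5%: the mechanical rule eases when, on Phelps's reading, it should tighten.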
Posted by Mark Thoma on Friday, March 14, 2008 at 12:20 AM in Economics, Macroeconomics, Monetary Policy