Models, Markets, and Monetary Policy

Richard H. Clarida

It is an honor and a privilege to participate once again in this annual Hoover Institution Monetary Policy Conference. The topic of this year’s conference, “Strategies for Monetary Policy,” is especially timely. As you know, the Federal Reserve System is conducting a review of the strategy, tools, and communication practices we deploy to pursue our dual-mandate goals of maximum employment and price stability. In this review, we expect to benefit from the insights and perspectives that are presented today, as well as those offered at other conferences devoted to this topic, as we assess possible practical ways in which we might refine our existing monetary policy framework to better achieve our dual-mandate goals on a sustained basis.

My talk today will not, however, be devoted to a broad review of the Fed’s monetary policy framework—that process is ongoing, and I would not want to prejudge the outcome—but it will instead focus on some of the important ways in which economic models and financial market signals help me think about conducting monetary policy in practice after a career of thinking about it in theory.1

The Role of Monetary Policy
Let me set the scene with a very brief—and certainly selective—review of the evolution over the past several decades of professional thinking about monetary policy. I will begin with Milton Friedman’s landmark 1967 American Economic Association presidential address, “The Role of Monetary Policy.”2 This article is, of course, most famous for its message that there is no long-run, exploitable tradeoff between inflation and unemployment. And in this paper, Friedman introduced the concept of the “natural rate of unemployment,” which today we call u*.3 What is less widely appreciated is that Friedman’s article also contains a concise but insightful discussion of Wicksell’s “natural rate of interest”—r* in today’s terminology—the real interest rate consistent with price stability. But while u* and r* provide key reference points in Friedman’s framework for assessing how far an economy may be from its long-run equilibrium in labor and financial markets, they play absolutely no role in the monetary policy rule he advocates: his well-known k-percent rule that central banks should aim for and deliver a constant rate of growth of a monetary aggregate. This simple rule, he believed, could deliver long-run price stability without requiring the central bank to take a stand on, model, or estimate either r* or u*. Although he acknowledged that shocks would push u away from u* (and, implicitly, r away from r*), Friedman felt the role of monetary policy was to operate with a simple quantity rule that did not itself introduce potential instability into the process by which an economy on its own would converge to u* and r*.4 In Friedman’s policy framework, u* and r* are economic destinations, not policy rule inputs.

Of course, I do not need to elaborate for this audience that the history of k-percent rules is that they were rarely tried, and when they were tried in the 1970s and the 1980s, they were found to work much better in theory than in practice.5 Velocity relationships proved to be empirically unstable, and there was often only a very loose connection between the growth rate of the monetary base—which the central bank could control—and the growth rate of the broader monetary aggregates, which are more tightly linked to economic activity. Moreover, the macroeconomic priority in the 1980s in the United States, the United Kingdom, and other major countries was to do “whatever it takes” to break the back of inflation and to restore the credibility squandered by central banks that had been unable or unwilling to provide a nominal anchor after the collapse of the Bretton Woods system.

By the early 1990s, the back of inflation had been broken (thank you, Paul Volcker), conditions for price stability had been achieved (thank you, Alan Greenspan), and the time was right for something to fill the vacuum in central bank practice left by the realization that monetary aggregate targeting was not, in practice, a workable monetary policy framework. Although it was mostly unspoken, there was a growing sense at the time that a simple, systematic framework for central bank practice was needed to ensure that the hard-won gains from breaking the back of inflation were not given away by short-sighted, discretionary monetary experiments that were poorly executed, as had been the case in the 1970s.

Policy Rate Rules
That vacuum, of course, was filled by John Taylor in his classic 1993 paper, “Discretion versus Policy Rules in Practice.” Again, for this audience, I will not need to remind you of the enormous impact this single paper had not only on the field of monetary economics, but also—and more importantly—on the practice of monetary policy. For our purposes today, I will note that the crucial insight of John’s paper was that, whereas a central bank could pick the “k” in a “k-percent” rule on its own, without any reference to the underlying parameters of the economy (including r* and u*), a well-designed rule for setting a short-term interest rate as a policy instrument should, John argued, respect several requirements.6 First, the rule should anchor the nominal policy rate at a level equal to the sum of the central bank’s estimate of the neutral real interest rate (r*) and the inflation target. Second, to achieve this nominal anchor, the central bank should be prepared to raise the nominal policy rate by more than one-for-one when inflation exceeds target (the Taylor principle). And, third, the central bank should lean against the wind when output deviates from its estimate of potential or, via an Okun’s law relationship, when the unemployment rate deviates from its estimate of u*.

In other words, whereas in Friedman’s k-percent policy rule u* and r* are destinations irrelevant to the choice of k, in the Taylor rule—and most subsequent Taylor-type rules—u* and r* are necessary inputs. As Woodford (2003) demonstrates theoretically, the first two requirements for a Taylor-type rule are necessary for it to be consistent with the objective of price stability. The third requirement—that monetary policy lean against the wind in response to an output or unemployment gap—not only contributes to the objective of price stability, but is also obviously desirable from the perspective of a central bank like the Fed that has a dual mandate. The Taylor approach to instrument-rule specification has been found to produce good macroeconomic outcomes across a wide range of macroeconomic models. Moreover, in a broad class of both closed and open economy dynamic stochastic general equilibrium, or DSGE, models, Taylor-type rules can be shown to be optimal given the underlying micro foundations of these models.

In original formulations of Taylor-type rules, r* was treated as constant and set equal to 2 percent, and potential output was set equal to the Congressional Budget Office (CBO) estimates of potential output, or, in specifications using the unemployment rate as the activity variable, u* was set equal to the CBO’s estimate of the natural unemployment rate. These assumptions were reasonable at the time, and I myself wrote a number of papers with coauthors in the years before the Global Financial Crisis that incorporated them.7
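
To fix ideas, the sketch below encodes the original 1993 calibration in a few lines, assuming, as Taylor did, a constant r* of 2 percent, a 2 percent inflation objective, and coefficients of 0.5 on the inflation and output gaps; the function and variable names are illustrative, not anyone’s production code.

```python
def taylor_rule_1993(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Original Taylor (1993) rule: prescribed nominal funds rate, in percent.

    Anchors the nominal rate at r* plus target inflation, raises it more
    than one-for-one with inflation (the Taylor principle, via the implied
    1 + 0.5 coefficient), and leans against the output gap.
    """
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Illustrative reading: 2% inflation with output at potential implies a 4%
# funds rate -- the sum of the (assumed) 2% neutral real rate and 2% inflation.
print(taylor_rule_1993(inflation=2.0, output_gap=0.0))  # -> 4.0
```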

A Dive into Data Dependence
Fast-forward to today. At each Federal Open Market Committee (FOMC) meeting, my colleagues and I consult potential policy rate paths implied by a number of policy rules, as we assess what adjustments, if any, may be required for the stance of monetary policy to achieve and maintain our dual-mandate objectives.8 A presentation and discussion of several of these rules has been included in the semiannual Monetary Policy Report to the Congress since July 2017.9 One thing I have come to appreciate is that, as I assess the benefits and costs of alternative policy scenarios based on a set of policy rules and economic projections, it is important to recognize up front that key inputs to this assessment, including u* and r*, are unobservable and must be inferred from data via models.10 I would now like to discuss how I incorporate such considerations into thinking about how to choose among monetary policy alternatives.

A monetary policy strategy must find a way to combine incoming data and a model of the economy with a healthy dose of judgment—and humility!—to formulate, and then communicate, a path for the policy rate most consistent with the central bank’s objectives. There are two distinct ways in which I think that the path for the federal funds rate should be data dependent.11 First, monetary policy should be data dependent in the sense that incoming data reveal at any point in time where the economy is relative to the ultimate objectives of price stability and maximum employment. This information on where the economy is relative to the goals of monetary policy is an important input into interest rate feedback rules—after all, they have to feed back on something. Data dependence in this sense is well understood, as it is of the type implied by a large family of policy rules, including the Taylor-type rules discussed earlier, in which the parameters of the economy needed to formulate such rules are taken as known.

But, of course, key parameters needed to formulate such rules, including u* and r*, are unknown. As a result, in the real world, monetary policy should be—and in the United States, I believe, is—data dependent in a second sense: Policymakers should and do study incoming data and use models to extract signals that enable them to update and improve estimates of r* and u*. As indicated in the Summary of Economic Projections, FOMC participants have, over the past seven years, repeatedly revised down their estimates of both u* and r* as unemployment fell and real interest rates remained well below prior estimates of neutral without the rise in inflation those earlier estimates would have predicted (figures 1 and 2). And these revisions to u* and r* appear to have had an important influence on the path for the policy rate actually implemented in recent years. One could interpret any changes in the conduct of policy as a shift in the central bank’s reaction function. But in my view, when such changes result from revised estimates of u* or r*, they merely reflect an updating of an existing reaction function.
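
A minimal sketch of this second kind of data dependence appears below. It assumes, purely for illustration, that r* follows a random walk observed through a noisy reading of the real rate consistent with stable inflation, and it updates the estimate with a scalar Kalman filter; the models actually used for this purpose, such as Laubach and Williams’s, are considerably richer.

```python
def update_r_star(prior_mean, prior_var, observed_real_rate,
                  obs_noise_var=1.0, drift_var=0.01):
    """One scalar Kalman-filter step for an unobserved r*.

    State equation:  r*_t  = r*_{t-1} + w_t,  w_t ~ N(0, drift_var)
    Observation:     obs_t = r*_t + v_t,      v_t ~ N(0, obs_noise_var)
    Returns the updated (posterior) mean and variance of r*.
    """
    pred_var = prior_var + drift_var              # propagate uncertainty
    gain = pred_var / (pred_var + obs_noise_var)  # weight on the new signal
    post_mean = prior_mean + gain * (observed_real_rate - prior_mean)
    post_var = (1.0 - gain) * pred_var
    return post_mean, post_var

# Persistently low observed real rates pull the r* estimate down gradually,
# echoing the repeated downward revisions described above (values hypothetical).
mean, var = 2.0, 0.25
for obs in [0.8, 0.5, 0.7, 0.4]:
    mean, var = update_r_star(mean, var, obs)
print(round(mean, 2))  # drifts down from 2.0 toward the observed rates
```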

In addition to u* and r*, another important input into any monetary policy assessment is the state of inflation expectations. Since the late 1990s, inflation expectations appear to have been stable and are often said to be “well anchored.” However, inflation expectations are not directly observable; they must be inferred from models, other macroeconomic information, market prices, and surveys. Longer-term inflation expectations that are anchored materially above or below the 2 percent inflation objective present a risk to price stability. For this reason, policymakers should and do study incoming data to extract signals that can be used to update and improve estimates of expected inflation. In many theoretical rational expectations models, expected inflation is anchored at the target level by assumption. From a risk-management perspective, it makes sense, I believe, to regularly test this assumption against empirical evidence.

Financial Markets and Monetary Policy—Extracting Signal from Noise
Because the true model of the economy is unknown, either because the structure is unknown or because the parameters of a known structure are evolving, I believe policymakers should consult a number and variety of sources of information about neutral real interest rates and expected inflation, to name just two key macroeconomic variables. Because macroeconomic models of r* and long-term inflation expectations are potentially misspecified, seeking out other sources of information that are not derived from the same models can be especially useful. To be sure, financial market signals are inevitably noisy, and day-to-day movements in asset prices are unlikely to tell us much about the cyclical or structural position of the economy.12 However, persistent shifts in financial market conditions can be informative, and signals derived from financial market data—along with surveys of households, firms, and market participants, as well as outside forecasts—can be an important complement to estimates obtained from historically estimated and calibrated macroeconomic models.13

Interest rate futures and interest rate swaps markets provide one source of high-frequency information about the path and destination for the federal funds rate expected by market participants (figure 3). Interest rate option markets, under certain assumptions, can offer insights about the entire ex ante probability distribution of policy rate outcomes for calendar dates near or far into the future (figure 4). And, indeed, when one reads that a future policy decision by the Fed or any central bank is “fully priced in,” this is usually based on a “straight read” of futures and options prices. But these signals from interest rate derivatives markets are only a pure measure of the expected policy rate path under the assumption of a zero risk premium. For this reason, it is useful to compare policy rate paths derived from market prices with the path obtained from surveys of market participants, which, while subject to measurement error, should not be contaminated with a term premium. Market- and survey-based estimates of the policy rate path are often highly correlated. But when there is a divergence between the path or destination for the policy rate implied by the surveys and a straight read of interest rate derivatives prices, I place at least as much weight on the survey evidence (for example, derived from the surveys of primary dealers and market participants conducted by the Federal Reserve Bank of New York) as I do on the estimates obtained from market prices (figure 3).
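
As an illustration of what a “straight read” involves, the sketch below backs out the priced-in probability of a single 25 basis point move from a fed funds futures quote, under the strong assumptions of a zero risk premium and only two possible outcomes; the rates shown are hypothetical.

```python
def implied_move_probability(futures_implied_rate, current_rate, move_bp=-25):
    """Back out the risk-neutral probability of a single policy move.

    Assumes a zero risk premium and exactly two outcomes: the target rate
    is unchanged, or it moves by `move_bp` basis points. Under those
    assumptions the futures-implied rate is the probability-weighted
    average of the two outcomes.
    """
    move = move_bp / 100.0  # basis points -> percentage points
    return (futures_implied_rate - current_rate) / move

# Hypothetical: funds rate at 2.40%, futures imply 2.30% after the meeting.
print(implied_move_probability(2.30, 2.40))  # -> 0.4, i.e., a 40% priced-in cut
```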

The Treasury yield curve can provide another source of information about the expected path and ultimate longer-run destination of the policy rate. But, again, the yield curve, like the interest rate futures strip, reflects not only expectations of the path of short-term interest rates, but also liquidity and term premium factors. Thus, to extract signal about policy from noise in the yield curve, a term structure model is required. But different term structure models can and do produce different estimates of the expected path for policy and thus of the term premium. Moreover, fluctuations in the term premium on U.S. Treasury yields are driven in part by a significant “global” factor, which complicates efforts to treat the slope of the yield curve as a sufficient statistic for the expected path of U.S. monetary policy (Clarida, 2018c). Here again, surveys of market participants can provide useful information—for example, about “the expected average federal funds rate over the next 10 years,” which provides an alternative way to identify the term premium component in the U.S. Treasury curve.
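
The survey-based identification just described is, in accounting terms, a simple subtraction, sketched below with hypothetical numbers and abstracting from convexity and liquidity effects.

```python
def term_premium_from_survey(treasury_yield_10y, survey_avg_ffr_10y):
    """Survey-based decomposition of a 10-year Treasury yield.

    Under expectations-hypothesis accounting, the yield equals the
    expected average short rate over the next ten years plus a term
    premium; using a survey for the expectations leg isolates the premium.
    """
    return treasury_yield_10y - survey_avg_ffr_10y

# Hypothetical: a 2.50% 10-year yield and a 2.40% survey-expected average
# funds rate imply roughly a 10 basis point term premium.
print(round(term_premium_from_survey(2.50, 2.40), 2))  # -> 0.1
```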

Quotes from the Treasury Inflation-Protected Securities (TIPS) market can provide valuable information about two key inputs to monetary policy analysis: long-run r* and expected inflation.14 Direct reads of TIPS spot rates and forward rates are signals of the levels of real interest rates that investors expect at various horizons, and they can be used to complement model-based estimates of r*. In addition, TIPS market data, together with nominal Treasury yields, can be used to construct measures of “breakeven inflation,” or inflation compensation, that provide a noisy signal of market expectations of future inflation. But, again, a straight read of breakeven inflation needs to be augmented with a model to filter out the liquidity and risk premium components that drive a wedge between inflation compensation and expected inflation.
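
The breakeven arithmetic and the premium adjustment can be sketched as follows; the size of the premium wedge stands in for a model-based estimate and is purely illustrative, as are the yields.

```python
def expected_inflation_from_tips(nominal_yield, tips_real_yield,
                                 premium_wedge=0.2):
    """Estimate expected inflation from TIPS breakevens.

    Breakeven inflation is the gap between nominal and TIPS real yields
    of matched maturity. `premium_wedge` is a stand-in for a model-based
    estimate of the net liquidity and inflation-risk premium components
    that must be filtered out; 0.2 is purely illustrative.
    """
    breakeven = nominal_yield - tips_real_yield
    return breakeven - premium_wedge

# Hypothetical: 2.5% nominal, 0.6% TIPS real -> 1.9% breakeven, or 1.7%
# expected inflation after removing the assumed premium wedge.
print(round(expected_inflation_from_tips(2.5, 0.6), 2))  # -> 1.7
```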

As is the case with the yield curve and interest rate futures, it is useful to compare estimates of expected inflation derived from breakeven inflation data with estimates of expected inflation obtained from surveys—for example, the expected inflation over the next 5 to 10 years from the University of Michigan Surveys of Consumers (figure 5). Market- and survey-based estimates of expected inflation are correlated, but, again, when there is a divergence between the two, I place at least as much weight on the survey evidence as on the market-derived estimates.
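
One simple way to operationalize placing “at least as much weight” on surveys is a convex combination with a survey weight of one-half or more, as sketched below; the weights and numbers are illustrative and do not describe any official procedure.

```python
def combine_inflation_estimates(survey_estimate, market_estimate,
                                survey_weight=0.5):
    """Convex combination of survey- and market-based expected inflation.

    A survey_weight of at least 0.5 encodes placing "at least as much
    weight" on surveys as on market-derived estimates when they diverge.
    """
    assert 0.5 <= survey_weight <= 1.0
    return survey_weight * survey_estimate + (1 - survey_weight) * market_estimate

# Hypothetical divergence: surveys at 2.5%, market breakevens at 1.7%.
print(round(combine_inflation_estimates(2.5, 1.7, survey_weight=0.6), 2))  # -> 2.18
```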

The examples I have mentioned illustrate the important point that, in practice, there is not typically a clean distinction between “model-based” and “market-based” inference of key economic variables such as r* and expected inflation. The reason is that market prices reflect not only market expectations, but also risk and liquidity premiums that need to be filtered out to recover the object of interest—for example, expected inflation or long-run r*. This filtering almost always requires a model of some sort, so even market-based estimates of key inputs to monetary policy are, to some extent, model dependent.

Implications for Monetary Policy
Let me now draw together some implications of the approach to models, markets, and monetary policy I have laid out in these remarks. Macroeconomic models are, of course, an essential tool for monetary policy analysis, but the structure of the economy evolves, and the policy framework must be—and I believe, at the Federal Reserve, is—nimble enough to respect this evolution. While financial market signals can and sometimes do provide a reality check on the predictions of “a model gone astray,” market prices are, at best, noisy signals of the macroeconomic variables of interest, and the process of filtering out the noise itself requires a model—and good judgment. Survey estimates of the long-run destination for key monetary policy inputs can—and, at the Fed, do—complement the predictions from macro models and market prices (figure 6).15 Yes, the Fed’s job would be (much) easier if the real world of 2019 satisfied the requirements to run Friedman’s k-percent policy rule, but it does not and has not for at least 50 years, and our policy framework must and does reflect this reality.

This reality includes the fact that the U.S. economy is in a very good place. The unemployment rate is at a 50-year low, real wages are rising in line with productivity, inflationary pressures are muted, and expected inflation is stable. Moreover, the federal funds rate is now in the range of estimates of its longer-run neutral level, and the unemployment rate is not far below many estimates of u*. Plugging these estimates into a 1993 Taylor rule produces a federal funds rate very close to our current target range for the policy rate.16 So with the economy operating at or very close to the Fed’s dual-mandate objectives and with the policy rate in the range of FOMC participants’ estimates of neutral, we can, I believe, afford to be data dependent—in both senses of the term as I have discussed—as we assess what, if any, further adjustments in our policy stance might be required to maintain our dual-mandate objectives of maximum employment and price stability.
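
As a back-of-the-envelope check of that claim, one can rerun the taylor_rule_1993 sketch from earlier with illustrative 2019-vintage inputs: an r* near 0.5 percent, inflation at the 2 percent objective, and output roughly at potential yield a rate of about 2.5 percent, consistent with the then-current 2.25 to 2.5 percent target range. These inputs are assumptions for illustration, not official estimates.

```python
# Illustrative (not official) 2019 inputs, reusing taylor_rule_1993 from above:
# r* near 0.5 percent, inflation at the 2 percent objective, zero output gap.
rate = taylor_rule_1993(inflation=2.0, output_gap=0.0, r_star=0.5)
print(rate)  # -> 2.5, within the then-current 2.25-2.50 percent target range
```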