Monetary policy


Central banks and history: a troubled relationship

Published by Anonymous (not verified) on Thu, 16/09/2021 - 6:00pm in

Barry Eichengreen

The Bank of England co-organised a ‘History and Policy Making Conference‘ in late 2020. This guest post by Barry Eichengreen, Professor of Economics and Political Science at the University of California, Berkeley, is based on material included in his keynote address at the conference.

Learning from history is hard. At central banks, it can be hard to draw policymakers’ attention to historical evidence. Even when historical analogies are at the forefront of their minds, the right analogies are not always applied in the right way. In fact, over-reliance on a small number of compelling historical case studies can lead to suboptimal decisions. Policymakers therefore need access to a wide portfolio of analogies. They must also cultivate an historical sensibility that is suspicious of simplification and alert to the differences – as well as the similarities – between ‘now’ and ‘then’.

The Bank of England is at the forefront of the movement to foster historical research on monetary and financial issues. It welcomes researchers to its archives, where, as part of a broader change in British records policy, they can consult materials from as recently as 20 years ago. It has commissioned histories of the Bank, of which Harold James’ ‘Making a Modern Central Bank‘ is the latest. Staff and academics have collaborated on the development of historical time series for the British economy; the fruits of their efforts are available on the Bank’s website. The Bank has researchers on staff who utilise historical evidence and methods when studying monetary and financial topics such as the operation of international banking and the incidence of credit controls.

Curiously, however, if one searches for mentions of history in the ‘Minutes of the Monetary Policy Committee’, one finds few. These tend to be along the lines of ‘The business-inventories-to-sales-ratio has fallen to an historically low level’, or ‘Inflation expectations have picked up to above their historical average’. These are references to very recent history. I looked closely at the minutes from 2008–09, during the Global Financial Crisis, when one might have expected to find references to earlier financial crises, phenomena with which the Bank as an institution has ample experience; interestingly, I didn’t find them. This is in contrast to the minutes of the Federal Open Market Committee of the U.S. Federal Reserve, where one finds multiple references to the Great Depression.

How then should we understand the Bank of England’s investment in history? One answer is that this attention to history is part of the Bank’s communications strategy. An effective monetary policy strategy requires communicating the central bank’s objectives, strategies and decision-making processes to financial market participants, politicians and the public. Central bank independence is viable only if accompanied by adequate accountability; and communicating its thinking allows the central bank to be held accountable in the court of public opinion. Describing how objectives, strategies and decision-making processes have been re-shaped by events helps outsiders to comprehend the central bank’s priorities. This includes explaining how the Bank’s current objectives and approaches differ from those of the past. This conveys a sense of how the Bank as an institution has evolved and how its actions are affected by the context in which it operates. 

But is history useful to the Bank of England beyond its contribution to communications? One answer is that research is an important function of a central bank, and knowledge of economic history makes for better research. The research function is richer and better informs policy insofar as it employs scholars with knowledge of earlier economic, monetary and financial events, and researchers with a historical sensibility. 

Generating historical information and analysis is one thing; how exactly policymakers should use it is another. Most obviously, there is the temptation to use history for analogical reasoning. As I have argued elsewhere, analogies are an instinctually appealing mode of reasoning for humans. There is a literature in cognitive science arguing that analogical reasoning is central to human thinking. The use of analogy appears to develop spontaneously in humans at a very young age. Most other primate species, in contrast, can’t be taught to recognise relational patterns, or can be so taught only with great difficulty.

History, of course, is a rich source of analogies. Political scientists such as Richard Neustadt and Ernest May argue that policymakers resort to analogies especially during crises, when there is no time for deductive reasoning (for formal modeling of heretofore un-modelled circumstances). They point to the importance of the Munich analogy in President Harry Truman’s decision to go to war in South Korea, or to how the Battle of Dien Bien Phu informed John F. Kennedy’s decision to escalate the war in Vietnam. One of the leading books in this subfield is entitled, revealingly, ‘Analogies at War‘.

This observation leads to what might be called ‘the analogy problematique’. Analogical reasoning, like other forms of reasoning, is subject to misuse. Analogies can be invoked without testing them sufficiently – that is, without verifying the existence of similarities along the relevant dimensions. Decision makers rarely select the fittest candidate or candidates from a portfolio of analogies, tending to focus instead on one. They are excessively influenced by what Philip Zelikow refers to as ‘the searing analogy‘. For central bankers, more often than not, this is the Great Depression. As Ben Bernanke famously acknowledged at Milton Friedman’s 90th birthday celebration, ‘Regarding the Great Depression: you’re right, we did it. We’re very sorry. Thanks to you, we won’t do it again’. The problem is that searing analogies can divert attention from more relevant episodes and thus from important aspects of the current situation. They can serve as a set of blinders, in other words, as well as a set of spectacles (as argued in Eichengreen (2015)).

I am reminded of this each time an economic and financial crisis erupts, most recently in March of 2020, when I get calls from reporters. Typically, the questioner starts: ‘How does this crisis compare to the Great Depression?’ To which I typically answer: ‘It doesn’t compare’.  For example, if we want to contrast the Great Depression with the onset of the Covid-19 recession, then the starting point should be that the Depression was first and foremost a shock to aggregate demand, whereas the Covid recession started with a shock to aggregate supply. Just because output fell in both cases doesn’t mean that there is a useful analogy.

At which point my interlocutor responds, ‘OK, but if not the Great Depression, then what other episode is analogous?’ To which the relevant response might be: ‘Maybe there is no good analogy’. This response would be appropriate if Covid-19 is unprecedented in important respects.

These observations point to a slightly different use of history. History can be used to focus attention on what is distinctive about current circumstances. What is relevant is precisely what is not analogous with earlier historical episodes. History is useful for understanding how ‘this time is different’, to invoke the title of a well-known book. But accomplishing this requires marshaling a portfolio of historical analogies, and not just one, to return to an earlier point. It requires understanding what is distinctive about each episode. That’s where a historical sensibility, acquired through training or long experience, comes in.

Yet another way in which history can be useful for policy is as a parable. A parable is a simple didactic story that illustrates one or more instructive lessons or principles. Catherine Schenk has described how Fed Chair William McChesney Martin, in a controversial 1965 speech, invoked the 1920s as a parable illustrating the point that economic and financial stability should not be taken for granted. It is not clear that Martin took the analogy between the 1920s and 1960s seriously; there is no evidence that he had read his Friedman and Schwartz. But he saw a speech highlighting the similarities as useful for making his point.

Unfortunately, while parables are simple by construction, history is complex – as historians will be quick to tell you. This raises the danger that the parable does violence to the history by stripping away essential complexity. A parable can be useful because it conveys an important point. But it can mislead if this distillation crowds out deeper understanding of the actual event.

At this point, economic history becomes a source of ‘stylized facts’. This is a term that my dissertation advisor, Bill Parker, actively despised. He never tired of reminding students that, for an historian, there is no such thing as a stylized fact. (Realising this is what Parker meant by possessing a historical sensibility.)  Fortunately, there is no shortage of prickly economic historians to prevent history from being distilled in this way.

Barry Eichengreen is Professor of Economics and Political Science at the University of California, Berkeley.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Labor Day Reflections: Growth Doesn’t Solve Inequality

Published by Anonymous (not verified) on Fri, 10/09/2021 - 11:25pm in
by Taylor Lange

Labor Day, like other holidays of remembrance, is an opportunity to reflect on the past and critically consider the future. Our memory ought to include the foot soldiers of the labor movement, from the 10,000 coal miners who fought in the Battle of Blair Mountain to the steel workers who duked it out with the Pinkertons at Homestead mill. We owe our rights as workers to the bitter struggles of many who preceded us.

Despite the gains of the labor movement, it seems we still have a long way to go. It is well-documented that while the productivity of the American worker has continued to rise, our compensation has risen more slowly. All that extra surplus has gone right into the bosses’ pockets; Mother Jones would be up in arms by now.

One result of the asymmetric rise in wages and productivity has been worsening income inequality. Some would argue that economic growth will fix the problem. After all, a rising tide lifts all boats, right? Brian Snyder didn’t think so, and neither do I based on the following findings.

Revisiting Kuznets’ Curve


Figure 1. The Kuznets curve is a graphical representation of Simon Kuznets’ theory on inequality and economic growth.

The earliest and most influential economist to write about economic growth and income inequality was Simon Kuznets. He compared income data from developing and developed countries from the 1920s through the early 1950s, and found that the poorest 10-20 percent of households in the developed countries were earning a growing share of GDP while the top 10-20 percent were earning less. In developing countries, the reverse occurred; as these countries grew in terms of GDP (and developed in terms of GDP per capita), inequality worsened.

This stark contrast between developed and developing countries led Kuznets to speculate on the relationships between growth, development, and income inequality. He theorized that as a country grew and developed initially, its workforce would switch from agriculture to industry and rural to urban. The cost of migration and learning new skills would cause the laboring class to lose wealth and income while the capital class benefited from the increased output. Afterward, as labor became more skilled, laborers’ share of income would increase. In other words, Kuznets thought that economic growth would worsen income inequality initially, but ultimately lessen it.[i] Graphically, this appears as an inverted “U”—hence its nickname, the “Kuznets curve.”

Kuznets Versus Modern Data

While Kuznets’ analyses were groundbreaking for the time, the times have changed. When we consider the same countries Kuznets did, we discover that his initial findings no longer reflect today’s trends.


Figure 2. Inequality vs. GDP per capita. Inequality is the ratio of income received by the top 10 percent of earners to income received by the bottom 50 percent. Per capita GDP data are from 1960-2018. (Given the growth of per capita GDP during that period, the X axis is also a proxy for time.)

Data from the last 60 years provide insights on how the economy (in per capita terms) and the income share of the wealthiest Americans have grown.[ii] Towards the lower end of GDP per capita, we see that growth accompanies decreased inequality. Those data points happen to come shortly after Kuznets’ famous findings. The trend does not last, though, as inequality starts to increase again around 10,000 dollars of per-capita GDP, and per-capita growth thereafter benefits the wealthy disproportionately.
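The inequality measure plotted in Figure 2 is a simple ratio of income shares. Below is a minimal sketch of how one might construct it from a World Inequality Database extract; the file name and column names are placeholders, not WID variable codes.

```python
import pandas as pd

# Placeholder extract from the World Inequality Database for the USA, with
# hypothetical columns: 'year', 'top10_share', 'bottom50_share' (pre-tax
# national income shares), and 'gdp_pc' (real GDP per capita).
wid = pd.read_csv("wid_usa_extract.csv")

# Figure 2's inequality measure: income of the top 10% relative to the bottom 50%.
wid["inequality"] = wid["top10_share"] / wid["bottom50_share"]

# Scatter against per capita GDP (which, given steady growth, also proxies for time).
ax = wid.plot.scatter(x="gdp_pc", y="inequality")
ax.set_xlabel("GDP per capita")
ax.set_ylabel("Top 10% / bottom 50% income ratio")
```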

Keen observers would note that inequality seems to be leveling off, suggesting the USA may be at another turning point like the one predicted by Kuznets. While this is plausible, several supposed turning points appear along the continuum of GDP per-capita growth, and are consistently followed by subsequent increases in inequality. How many bobs at the apple should the late Kuznets get? And, if we have a number of mini-Kuznets curves over time or at different levels of income, is that really a Kuznets curve at all?

While it’s possible that a few more years’ worth of data could see a true decrease in top earners’ shares, it’s likely that deliberate policy choices are mostly to blame for the expanding wealth of the rich.

Policy Arenas and Inequality

So, what are these policies? An extensive examination by the Economic Policy Institute (EPI) sheds some light on many of the major policies that have eroded American labor, including:

  1. Trade deficits and outsourcing — Multinational companies and free trade agreements undercut the wages and job security of non-college educated workers in favor of protecting profits. Sending jobs overseas forced the USA to import more, resulting in a trade deficit that was disastrous for U.S. manufacturing.
  2. Erosion of unions and the right to organize — Anti-union sentiments have allowed for corporate attacks on workers’ rights, and a recent Supreme Court decision has been the latest in a series that have eroded workers’ ability to collectively bargain.
  3. Stagnation of the federal minimum wage — The last time the federal minimum wage went up was in 2009; we’re in the longest period without a raise since the minimum wage’s inception. Adjusting for inflation, the $7.25 minimum is worth 16.5 percent less than when it took effect in 2009, which means you can buy even less with it.
  4. Lack of enforcement of wage theft laws — Wage theft occurs when workers are not paid for hours worked or when employers confiscate tips, among other violations. A study done by the EPI indicates that workers in ten U.S. states lose an average of 3,300 dollars per worker per year to wage theft.

All of these policies and oversights are symptomatic of a growth-driven mindset aimed at increasing consumption and output. These types of policies echo the adverse working conditions and standards that ignited the American labor movement and should be met with the determined opposition of Samuel Gompers himself!

A Just and Equitable Steady State

Addressing income inequality requires a societal desire for equality, followed by regulatory action by the government. We must be intentional and explicit with policies crafted for an equitable distribution of wealth as well as a sustainable size of economy. As Herman Daly argued, we need institutions that “limit the degree of inequality … since growth can no longer be appealed to as the answer to poverty.”[iii] For that matter, we can’t simplistically assume that degrowth or a steady state economy would ensure fairness either. Income fairness (not necessarily absolute equivalence of incomes or wealth) is a goal worth formulating policy for.

The violence and general unrest that characterized the labor movement are symptomatic of the link between social stability and income equality. Steady staters should consider and craft policy instruments to address income and wealth inequality. After all, how can a state be steady if it isn’t stable?

 

[i] Simon Kuznets, “Economic Growth and Income Inequality,” The American Economic Review 45, no. 1 (1955): 1–28.

[ii] World Inequality Lab, “World Inequality Database” (World Inequality Lab, 2021), https://wid.world/.

[iii] Herman E. Daly, “The Economics of the Steady State,” The American Economic Review 64, no. 2 (1974): 15–21.

Taylor Lange is an ecological economist with the Center for the Advancement of the Steady State Economy (CASSE).



The Housing Boom and the Decline in Mortgage Rates

Published by Anonymous (not verified) on Thu, 09/09/2021 - 6:01am in

During the pandemic, national home values and housing activity soared as mortgage rates declined to historic lows. Under the canonical “user cost” house price model, home values are held to be very sensitive to interest rates, especially at low interest rate levels. A calibration of this model can account for the house price boom with the observed decline in interest rates. But empirically, we find that home values are nowhere near as sensitive to interest rates as the user cost model predicts. This lower sensitivity is also found in prior economic research. Thus, the historical experience suggests that lower interest rates can only account for a tiny fraction of the pandemic house price boom. Instead, we find more scope for lower interest rates to explain the rise in housing activity, both sales and construction.

Since February 2020, national home values have risen more than 15 percent across several house price indices. At the same time, existing home sales and building permits for new privately owned housing units have soared to levels last seen in 2007, and Q4/Q4 real residential investment, as measured by the Bureau of Economic Analysis, grew about 16 percent in 2020. Thirty-year fixed rate mortgage rates dropped to an historic low of 2.7 percent in December 2020. At 3 percent during the summer of 2021, mortgage rates remain depressed and 50 basis points below February 2020 levels. How much of the housing boom can be explained by the lower level of interest rates?

Elasticity of House Prices to Mortgage Rates in Theory: The User Cost Model

Standard calibrations of the most popular theoretical framework of housing valuation—the user cost model—can in fact quantitatively explain the rise in house prices with the decline in interest rates. In its simplest form, the model postulates that the raw return on housing, including both the rent yield and growth of rent, should be equal to the sum of borrowing cost and property taxes, maintenance, and insurance (taking housing supply and rents as given):

R/P + g = ρ + τ,

where R/P is the rent-to-price yield, g is the expected capital gain rate, ρ is the effective borrowing cost (mortgage rate after tax deduction), and τ accounts for property taxes, maintenance, and insurance.

From this formula, we can calculate how much home prices rise for every 1 percentage point decline in the mortgage rate, or the semi-elasticity of house prices to changes in mortgage rates, which we refer to as the “semi-elasticity.” The chart below illustrates the predicted semi-elasticity under a set of commonly used parameters from Himmelberg et al. (2005) (marginal tax rate at 25 percent, property taxes, maintenance and insurance in total at 4 percent, growth of rent at 3.8 percent). Importantly, the semi-elasticity rises as interest rates decline, meaning that house prices become particularly sensitive to interest rate changes in a low-rate environment. For example, the semi-elasticity is about 23 when mortgage rates are at 4 percent, but it increases to about 30 when mortgage rates are at 3 percent. These statistics suggest that a decline in the mortgage rate from 3.5 percent to 3 percent would cause home prices to rise about 14 percent, which is roughly what we observed. But as we show in the rest of the post, these predicted effects are much larger than our empirical estimates and those found in the economic literature.
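To make the calibration concrete, here is a minimal sketch of the semi-elasticity calculation under the Himmelberg et al. (2005) parameters quoted above; the function names are ours, and the numbers are only meant to reproduce the back-of-the-envelope figures in the text.

```python
def user_cost(m, tax=0.25, tau=0.04, g=0.038):
    """After-tax user cost of housing: (1 - tax) * m + tau - g,
    where m is the mortgage rate (decimal)."""
    return (1 - tax) * m + tau - g

def semi_elasticity(m, tax=0.25, tau=0.04, g=0.038):
    """Percent rise in house prices for a 1 percentage point fall in m,
    implied by P = R / user_cost."""
    return (1 - tax) / user_cost(m, tax, tau, g)

print(round(semi_elasticity(0.04), 1))   # ~23 at a 4 percent mortgage rate
print(round(semi_elasticity(0.03), 1))   # ~30 at a 3 percent mortgage rate

# Implied price change from a fall in the mortgage rate from 3.5% to 3%:
print(user_cost(0.035) / user_cost(0.03) - 1)   # ~0.15, i.e. roughly 14-15 percent
```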

User Cost Model Predicts High Sensitivity of House Prices to Mortgage Rates

Note: This graph plots the semi-elasticity of house prices to changes in the mortgage rate as a function of the mortgage rate, or how much home prices rise for every 1 percentage point decline in the mortgage rate.

Source: Authors’ calculations.

Elasticity of House Prices and Activity to Mortgage Rates in Practice: Empirical Evidence

We use a Jorda (2005) linear projection framework and quarterly macroeconomic data between 1975 and 2020 to study the semi-elasticity of the FHFA house price index, building permits for single-family units, existing home sales, and residential investment (rescaled as a contribution to real GDP). Estimating the semi-elasticity to interest rates using macroeconomic data is challenging because movements in mortgage rates depend on the state of the economy, which is a confounding factor. Our first econometric specification (no contemporaneous controls) accounts for the economic state by including past realizations of the unemployment rate, the 1-year Treasury rate (as a proxy for monetary policy), and CPI inflation. In the second specification (contemporaneous controls), we include these controls contemporaneously, so that the estimated semi-elasticity only reflects changes in the residual mortgage rate component, which is driven by term premia for long-term rates and the mortgage basis (or spread). All specifications also include lags of each housing variable, so that the linear projections are equivalent to impulse responses from vector autoregressions commonly used in empirical macroeconomic analysis (Plagborg-Moller and Wolf 2021).
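As a rough illustration of this setup (a stylised sketch, not the authors' exact specification, code, or data), a single-outcome version of the Jorda local projections might look like the following; the column names, lag length, and horizon are placeholders.

```python
import pandas as pd
import statsmodels.api as sm

# df: quarterly DataFrame, 1975-2020. Column names are placeholders:
# 'hpi' (log FHFA house price index), 'mort' (30-year mortgage rate, in
# percentage points), plus controls 'unemp', 'treas1y', 'cpi_inf'.

def local_projection(df, outcome="hpi", shock="mort", horizons=12, lags=4,
                     contemporaneous_controls=False):
    """Jorda (2005)-style projection of the outcome at t+h on the mortgage
    rate at t, with lagged (and optionally contemporaneous) controls."""
    controls = ["unemp", "treas1y", "cpi_inf"]
    irf = {}
    for h in range(horizons + 1):
        data = pd.DataFrame({"y": df[outcome].shift(-h) - df[outcome].shift(1)})
        data[shock] = df[shock]
        for lag in range(1, lags + 1):
            for var in [outcome, shock] + controls:
                data[f"{var}_lag{lag}"] = df[var].shift(lag)
        if contemporaneous_controls:
            for var in controls:
                data[var] = df[var]
        res = sm.OLS(data["y"], sm.add_constant(data.drop(columns="y")),
                     missing="drop").fit()
        irf[h] = -res.params[shock]  # flip sign: response to a 1pp *decline*
    return pd.Series(irf)
```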

The chart below shows the response of each variable when excluding (orange) and including (blue) the contemporaneous controls after a 1 percentage point decline in the mortgage rate. We find significant effects, with the expected signs, and with stronger and more persistent responses when not including controls. The (maximum) semi-elasticity of house prices to mortgage rates is -2, with or without contemporaneous controls. This is less than a tenth as large as what is predicted by the user cost model. Housing activity is, instead, very sensitive to mortgage rates. After a 1 percentage point decline in mortgage rates, permits rise more than 10 percent, existing home sales increase 5-10 percent, and residential investment expressed as a contribution to real GDP increases 0.3 percentage point (0.2 percentage point with controls).

Mortgage Rates Affect House Prices and Housing Activity

Notes: The above charts show impulse responses to a 100 basis point (or 1 percentage point) decline in mortgage rates from linear projections (LPs). The LPs include lags of each dependent variable, mortgage rates, and a set of controls that include the unemployment rate, the 1-year Treasury rate (as proxy for monetary policy), and CPI inflation. The “contemporaneous controls” specification includes these controls up to quarter 0. Blue and orange bands are 90 percent confidence bands for the models with contemporaneous controls and with only lagged controls, respectively.

Sources: Authors’ calculations, based on data from Federal Reserve Board, Bureau of Labor Statistics, Freddie Mac, FHFA, Census Bureau, National Association of Realtors, and Bureau of Economic Analysis.

Elasticity of House Prices to Mortgage Rates in Practice: Prior Research

To what extent are our results specific to our statistical model? Prior research uses both macro and micro data to estimate the semi-elasticity of house prices to interest rates (in contrast, few studies look at the response of housing activity to interest rates). As shown in the table below, most empirical estimates from this literature suggest that house prices increase by less than 5 percent for every 1 percentage point decrease in (long-term) interest rates, substantially less than implied by the user cost model and consistent with our results.

Estimated Effects of Interest Rates on House Prices

Paper | U.S./International | Method | Home Price Appreciation After a 1 Percentage Point Drop in the Mortgage Rate
--- | --- | --- | ---
Macro papers: | | |
Del Negro and Otrok 2007 | U.S. | VAR | 0.8
Goodhart and Hofmann 2008 | International | VAR | 1.6
Jarocinski and Smets 2008 | U.S. | VAR | 2
Sa, Towbin, and Wieladek 2011 | International | VAR | 1.2
Williams 2015 | International | Fixed exchange rate | 6.3
Micro papers: | | |
DeFusco and Paciorek 2017 | U.S. | Bunching around CLL | [1.5, 2]
Adelino, Schoar, and Severino 2020 | U.S. | Diff in diff around CLL | [1.3, 5.3]
Davis, Oliner, Peter, Pinto 2020 | U.S. | Cut in FHA insurance premium | 3.4
Fuster and Zafar 2021 | U.S. | Consumer survey | 2.5

Notes: This table summarizes the literature on the relationship between house prices and interest rates. The “macro papers” panel summarizes five papers from the large literature using macroeconomic data. The second panel reviews four papers based on microeconomic data. The “U.S./International” column reports whether the study is based on U.S. or international data. The “Method” column reports the identification strategy. “CLL” stands for conforming loan limit. The last column reports the estimated effect on house prices after a 1 percentage point interest rate shock. For macro papers, these effects are 10 quarters after a 1 percentage point monetary policy shock. For the micro papers, the effects are for a 1 percentage point shock in the mortgage rate.

One difference between the macro and the micro literatures is in the measure of interest rate used. Macro papers tend to study monetary policy shocks or shocks to long term rates, either nominal or real. Micro studies often focus on mortgage rate shocks using arguably exogenous cutoffs at mortgage origination. For example, one such cutoff is the conforming loan limit (CLL). Most mortgages that are smaller than the CLL are guaranteed by Fannie Mae or Freddie Mac, enjoying lower interest rates than loans that are larger than the CLL, also known as jumbo loans. By looking at house prices around this exogenous cutoff and how they change when the CLL increases, Adelino, Schoar, and Severino (2020) find a semi-elasticity between -5.3 and -1.3.

Rather than using mortgage and house price data, Fuster and Zafar (2021) use the housing module of the New York Fed Survey of Consumer Expectations to elicit how much survey respondents are willing to pay for the same house in two hypothetical scenarios: when the mortgage rate is 4.5 percent or 6.5 percent. They find that even a 2-percentage point increase in the mortgage rate only lowers borrowers’ willingness to pay by 5 percent.
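In back-of-the-envelope terms, that survey response maps directly into the semi-elasticity reported in the table above:

5% / (6.5% − 4.5%) = 2.5 percent per percentage point of mortgage rate.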

Conclusion

The semi-elasticity of house prices to interest rates implied by the theoretical user cost model suggests that the decline in mortgage rates during the pandemic can quantitatively account for the national house price boom. But our empirical estimates and prior studies suggest that the decline in mortgage rates can only explain low single-digit house price increases. Instead, we find that housing activity, both sales and construction, are very sensitive to interest rates.

Haoyang Liu is an economist in the Federal Reserve Bank of New York’s Research and Statistics Group.

David Lucca is a vice president in the Bank’s Research and Statistics Group.

Dean Parker is a senior research analyst in the Bank’s Research and Statistics Group.

Gabriela Rays-Wahba is a senior research analyst in the Bank’s Research and Statistics Group.

How to cite this post:
Haoyang Liu, David Lucca, Dean Parker, and Gabriela Rays-Wahba, “The Housing Boom and the Decline in Mortgage Rates,” Federal Reserve Bank of New York Liberty Street Economics, September 7, 2021, https://libertystreeteconomics.newyorkfed.org/2021/09/the-housing-boom-a....

Disclaimer
The views expressed in this post are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.

Can sectoral supply shocks have aggregate demand consequences?

Published by Anonymous (not verified) on Tue, 07/09/2021 - 6:00pm in

Ambrogio Cesa-Bianchi and Andrea Ferrero

Restrictions on activity to curb the spread of Covid-19 led to a shutdown of specific parts of the economy. These lockdown measures can be thought of as a shock that suddenly decreases supply in the affected sectors, lowering their output and raising their prices. Guerrieri et al (2020) propose a theoretical model of ‘Keynesian supply shocks’ where a sectoral supply shock triggers knock-on effects on demand in other sectors which, if strong enough, can lead to a fall in aggregate prices and output – thus resembling an aggregate demand shock. In a recent paper, we provide empirical evidence supporting this hypothesis using pre-Covid data. Our results suggest a different way to look at the Covid crisis and business cycles in general.

What are Keynesian supply shocks anyway?

An example can help clarify the basic logic of Keynesian supply shocks and their transmission mechanism. Suppose the economy consists of two sectors, entertainment (offering movies in cinemas) and food (producing popcorn). A negative supply shock hits the entertainment sector so that the price of movie tickets increases. What happens to the food sector? If the two goods are substitutes, people switch from going to the movies to eating popcorn at home. The demand for popcorn increases, and so does its price to clear the market. If the two goods are complements, however, people do not enjoy eating popcorn without watching movies. In this case, the demand for popcorn falls and so does its price. As a result, the overall effect on prices is likely to be ambiguous. This second case corresponds to a Keynesian supply shock.

New empirical evidence

In a recent paper, we offer empirical support to the notion of Keynesian supply shocks using data on gross output and prices for 64 sectors of the US economy from 2005 Q1 to 2019 Q4. Any approach that only relies on aggregate data would simply classify sectoral supply shocks with aggregate demand consequences as aggregate demand shocks. Yet, sectoral data per se are not a silver bullet, as separating sectoral shocks that have aggregate consequences from true aggregate shocks poses severe identification challenges.

In our paper, we pursue a third route that does not require explicitly separating aggregate shocks from sectoral shocks with aggregate consequences. The intuition for our approach is that while aggregate demand shocks and Keynesian supply shocks imply the same restrictions on the response of aggregate data – both giving rise to positive comovement between quantities and prices – the sectoral responses to these shocks are different. True aggregate demand shocks should move quantities and prices in the same direction in all sectors. Keynesian supply shocks should instead move quantities and prices in opposite directions for those sectors that are directly hit by the sectoral shocks.

We formalize this intuition by specifying a multi-sector VAR model where sectoral output growth and inflation load on a vector of unobserved common factors that capture the comovement across sectors. Three key steps underpin our empirical analysis. First, we proxy the common factors by means of cross-sectional averages of the sectoral data, ie, with aggregate output growth and inflation. Second, we employ a standard sign restriction approach to extract two structural shocks from the common factors, one that leads to positive comovement between quantities and prices and one that leads to negative comovement between quantities and prices. We label these innovations aggregate ‘demand-like’ and aggregate ‘supply-like’ shocks, respectively, as their effects might be the result of truly aggregate shocks as well as sector-specific shocks with aggregate effects. Third, and finally, we estimate the sectoral loadings on the identified aggregate demand-like shock, which are key objects of interest of our analysis. These objects capture the impact response of each sector’s quantities and prices to the aggregate demand-like shock.
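To fix ideas on the second step, here is a toy sketch of sign-restriction identification on a two-variable system (aggregate output growth and inflation), not the authors' full multi-sector factor model; function and variable names are ours.

```python
import numpy as np

def sign_restricted_shocks(resid, n_draws=1000, seed=0):
    """Toy sign-restriction identification on a (T x 2) array of reduced-form
    residuals for (output growth, inflation): keep rotations in which shock 1
    moves both variables up ('demand-like') and shock 2 moves output up and
    inflation down ('supply-like')."""
    rng = np.random.default_rng(seed)
    sigma = np.cov(resid.T)              # reduced-form covariance of the two factors
    chol = np.linalg.cholesky(sigma)
    accepted = []
    for _ in range(n_draws):
        # random orthogonal matrix via QR of a Gaussian draw
        q, r = np.linalg.qr(rng.standard_normal((2, 2)))
        q = q @ np.diag(np.sign(np.diag(r)))
        impact = chol @ q                # candidate impact matrix
        # normalise column signs so each shock raises output on impact
        impact = impact @ np.diag(np.sign(impact[0, :]))
        if impact[1, 0] > 0 and impact[1, 1] < 0:
            accepted.append(impact)
    return accepted
```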

What the paper finds

While sectoral output and prices typically comove in response to aggregate demand-like shocks – mimicking the behaviour of their aggregate counterparts – in about 40% of cases we find that the two variables move in opposite directions. Our interpretation is that standard shock identification techniques that impose restrictions on aggregate data only (such as the ones we use to extract the structural shocks from the common factors) mis-classify shocks. In particular, some aggregate demand-like shocks are likely to be the consequence of a sectoral supply shock with strong complementarities at play – the Keynesian supply mechanism. Importantly, our sample ends in 2019 Q4 and thus the Covid episode does not drive the identification of the sectoral responses. Through the lens of our analysis, the response to the pandemic has just been an extreme realization of a more general structural feature of the US economy.

As an example, the figure reports the distribution of the factor loadings for output growth (yellow) and inflation (blue) to a negative aggregate demand-like shock in two selected sectors. The left panel, which refers to the Accommodation sector, shows the example of a sector that responds to the demand shock in line with the restriction imposed at the aggregate level, ie, with prices and quantities moving in the same direction. However, in many sectors the response of output growth and inflation is inconsistent with such a notion of demand shocks. For example, in the Apparel and leather and allied products sector (right panel), a fall in output growth is accompanied by an increase in inflation. This pattern is a robust feature of the pre-Covid data across many sectors of the US economy, suggesting that Keynesian supply shocks may be a regular feature of business cycles.

Policy implications

The distinction between ‘true’ aggregate demand shocks and Keynesian supply shocks matters, even though both lead to a contraction in output and inflation. If Keynesian supply shocks are quantitatively relevant, monetary policy is less likely to face trade-offs between output and inflation. As a corollary, policymakers can respond more aggressively to shocks, even when uncertain about their nature. In the first stages of the pandemic, substantial disagreement around the future evolution of inflation emerged, as the economic effects of the Covid-19 outbreak and the policy response combined supply and demand aspects. According to our empirical findings, the balance of risks would have been more skewed towards a fall in inflation than a standard aggregate supply/demand framework would have implied, thus justifying an aggressive monetary policy easing.

More generally, our findings suggest that breaking the dichotomy between aggregate demand and supply disturbances may be a fruitful avenue to advance our understanding of the sources of business-cycle fluctuations and, crucially, to design the appropriate policy responses.

Ambrogio Cesa-Bianchi works in the Bank’s Global Analysis Division and Andrea Ferrero works at the University of Oxford.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Household debt and consumption revisited

Published by Anonymous (not verified) on Wed, 01/09/2021 - 6:00pm in

Philip Bunn and May Rostom

The academic literature finds that the build-up of household debt before the 2008 financial crisis is linked to weaker consumption afterwards. But there is wider debate over the mechanisms at play. One strand of literature emphasises debt overhang acting through the level of leverage. Others find it was over-optimism acting through leverage growth. In this post, we revisit our previous analysis on leverage and consumption in the UK using synthetic cohort analysis. The correlation between leverage measures, and their link to other macroeconomic variables, means it’s challenging to tease out their effects. Yet we find that whilst both mechanisms played a role, there is evidence that debt overhang linked to tighter credit constraints was the bigger driver.

In the UK, the ratio of household debt to income rose from about 85% in 1997 to almost 150% in 2007, with much of that increase accounted for by increases in mortgage debt (Chart 1). For most of this period, household consumption growth was close to its historical average – there was no sharp acceleration in spending – and inflation was low and stable. When the crisis hit, consumption fell sharply. How might the build-up in debt have affected households’ consumption response in the wake of the GFC?

Chart 1: Household debt to income ratio rose sharply prior to the financial crisis while consumption growth was close to average

Academics have put several hypotheses forward to explain this relationship. For this post, we examine two of them: pre-crisis overoptimism, and debt overhang. These two hypotheses are not necessarily mutually exclusive. In fact, both are likely relevant, but not all animals are equal, and some are more important than others.

The over-optimism hypothesis

One view argues that households adjusted their expectations downwards. On this view, prior to the crisis some households were buoyed by looser credit conditions, while rapid growth in house prices increased the amount of collateral homeowners could borrow against. Households who felt positive about the future could also have been optimistic about their future income, and so would have been comfortable increasing leverage quickly. During the GFC, these more optimistic households had to revise down their income expectations and cut back their spending by more than others, which is why spending cuts correlate with the change in leverage.

In one paper, Andersen et al (2016) find a strong correlation between the increase in pre-crisis leverage and Danish households’ spending cuts during the recession, but a weaker one for the level of leverage. Similarly, in an aggregate-level cross-country comparison, Broadbent (2019) shows that growth in debt between 2005 and 2007 was a better predictor of the economic downturn than the level. However, none of these studies specifically refers to the UK.

The debt overhang hypothesis

A second hypothesis relates to the size of outstanding debt. Households who were highly leveraged going into the crisis faced binding borrowing constraints once credit conditions tightened, limiting their ability to refinance or borrow more. For the UK, this was important: during the GFC many mortgagors were on two-year fixed rates that were refinanced often. This ‘debt overhang’ may have caused these households with higher levels of leverage to cut back spending more. Indeed, a number of studies point to the importance of the level of pre-crisis leverage in explaining the weakness of consumption growth (Dynan (2012) and Baker (2018)), with debtors having higher marginal propensities to consume (Mian et al (2013)).

Credit constraints may also play a role here. Prior to the crisis, mortgage products with loan to value (LTV) ratios greater than 90% were common in the UK, and in some cases offered loans greater than the value of the property – the most infamous example being Northern Rock’s ‘Together’ mortgage at 125% LTV. When the crisis hit, those high LTV products disappeared (Chart 2). This reduction in credit availability will also have been amplified by falls in house prices, which will have raised a household’s outstanding LTV ratio for a given level of debt. UK house prices fell by up to 13% between 2007 and 2009. Taking these two facts together, any household going into the crisis with an outstanding LTV above 75% would have struggled to refinance their mortgage or take on additional debt. LTV ratios are primarily associated with the level of debt. We estimate that around 15% of mortgagors were in this position. These were primarily young households: the average age was 35, and five out of six were younger than 45.
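A stylized back-of-the-envelope calculation (ours, not the authors’) shows why 75% is roughly the relevant threshold: with house prices down 13%, a household that entered the crisis at a 75% LTV would see its ratio rise to about

0.75 / (1 − 0.13) ≈ 0.86,

leaving little headroom below the roughly 90% ceiling that remained on offer (Chart 2).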

Chart 2: There were very few mortgages with LTVs above 90% during the financial crisis

Revisiting the debate: micro evidence for the UK

We revisit our previous analysis (Bunn and Rostom (2015)) on the comovement between leverage and consumption during the GFC, to consider the role of these different measures and associated hypotheses. For the UK, household-level panel data containing both debt and consumption around the GFC are not available, so we track groups of households, or cohorts, over time using the well-established methodology of Deaton (1985). Nevertheless, our results tell a plausible story.
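As a rough sketch of what building Deaton-style synthetic cohorts involves (the column names are placeholders, not the actual Living Costs and Food Survey variables), the idea is to average household variables within birth-year-by-mortgagor cells and track the cell means across survey waves.

```python
import pandas as pd

# hh: household-level survey data pooled across waves, with placeholder columns:
# 'wave', 'birth_year' (of household head), 'mortgagor' (bool),
# 'consumption', 'income', 'mortgage_debt'

def build_cohorts(hh, min_cell_size=50):
    """Deaton (1985)-style synthetic cohorts: average household variables
    within cells defined by birth year of the head and mortgagor status,
    then track those cell means across survey waves."""
    cells = (hh.groupby(["wave", "birth_year", "mortgagor"])
               .agg(n=("consumption", "size"),
                    consumption=("consumption", "mean"),
                    income=("income", "mean"),
                    mortgage_debt=("mortgage_debt", "mean"))
               .reset_index())
    cells = cells[cells["n"] >= min_cell_size]      # drop thinly populated cells
    cells["lti"] = cells["mortgage_debt"] / cells["income"]
    return cells.sort_values(["birth_year", "mortgagor", "wave"])
```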

One challenge with this exercise is that all measures of debt we examine are well correlated – this means it’s hard to definitively conclude which leverage measures are driving this effect. For example, those with high levels of debt to income often also saw strong growth in their leverage (Chart 3).

Chart 3: Measures of the growth and level of leverage are well correlated

Table A reports the results of our reduced-form regressions for the growth in household spending over the financial crisis on different debt measures as explanatory variables (full details in the technical appendix). We also control for other factors such as income growth, wealth and household composition.

Table A: Regressions for household spending during the financial crisis with different debt metrics

Looking at levels of leverage, column 1 shows that groups of households who went into the crisis with high loan to income (LTI) ratios made larger cuts in spending during it. In column 2, the level of LTI is replaced with the change in LTI between 2003/04 and 2006/07. Again this is significant, showing that groups of households who experienced earlier rapid growth in debt also made larger reductions in spending. Including both debt measures together in column 3, the coefficient on each falls, but only the change in LTI remains statistically significant. This result is the same if the average LTI of a cohort is replaced by the percentage of high-LTI households within each cohort, and is consistent with the findings of Andersen et al (2016) for Denmark.

But this picture is incomplete because it abstracts from credit constraints. In column 4, we include a measure of the percentage of households in each cohort with a pre-crisis LTV above 75%. This measure, which is based on the level of leverage, aims to capture credit-constrained households. This metric also has a negative and statistically significant relationship with consumption growth during the financial crisis.

However, when we add the change in LTI in column 5, the coefficients on both the change in LTI and percentage of credit constrained households remain significant. Both coefficients are smaller than when they are included on their own. This relationship during the crisis is not seen in earlier periods when credit conditions were looser (see column 6 for one such example). 

As well as the statistical significance of the estimated coefficients, it is important to also consider their economic significance. Chart 4 shows that the magnitudes of spending cuts associated with debt during the GFC implied by all five equations are similar (the black diamonds), at just under 2% of aggregate private consumption. However, they differ on how to apportion it (the coloured bars). In equations 1, 2 and 4, only one channel is included by definition. In equation 5 – the only equation with two statistically significant debt measures – the percentage of credit constrained households accounts for 60% of the total effect, and the increase in debt in the run up to the crisis accounts for the remaining 40%.

Chart 4: Size of spending cuts associated with debt

Conclusion

What can these empirical results tell us about the co-movement between debt and consumption during the financial crisis in the UK? The strength of the correlation between the different measures of leverage makes it challenging to pin down the mechanism at play and to definitively prove causation. Nevertheless, they do support an important role for debt overhang, driven by a tightening in credit conditions, and typically captured by the level of leverage. And the role of credit constraints here is supported by the significant relationship between the percentage of households with an LTV ratio above 75% going into the crisis, and cuts in consumption during it.

There can also be more than one explanation, and we do find some weaker support for overoptimism too, although it is curious there was little sign of a large pre-crisis consumption boom in the macro data. In the run up to the GFC, aggregate consumption growth was close to its historical average and nothing like the boom of the late 1980s, implying that any such pre-crisis effects were probably modest.

Technical appendix

The data used on the regressions in Table A are described in more detail in Bunn and Rostom (2015) but the key points are summarised below:

Sample definition: Equations are estimated using cohort data where cohorts are defined by single birth year of the household head and mortgagor/non-mortgagor status. Only households where the head is aged 21–69 are included. The specification reported in equation 1 differs from the equivalent regression reported in Bunn and Rostom (2015) as cohort cells with insufficient observations, after calculating lagged changes in debt, are dropped.

Data sources: Living Costs and Food Survey for all variables except LTV and measures of wealth. For equations 1 to 5, LTV and wealth are from the Wealth and Assets survey, and from the British Household Panel Survey for equation 6.

Additional controls: All equations also include controls for income growth, changes in household composition and growth in housing and financial wealth and a constant. Equation 6 does not include a control for financial wealth due to data availability. 

Variable definitions: ∆lnC is the log change in non-housing consumption, LTI is the outstanding mortgage loan to income (LTI) ratio, ∆LTI is the change in the outstanding mortgage LTI ratio, LTV share>75% is the percentage of households in each cohort with an outstanding LTV ratio of more than 75%. For equations 1 to 5 period t is 2009/10 and t-1 is 2006/07. ∆LTIt-1 represents the change between 2003/04 and 2006/07. For equation 6 period t is 2006/07 and t-1 is 2003/04. ∆LTIt-1 represents the change between 2000/01 and 2003/04.
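Putting these definitions together, a stylized rendering of, for example, the column 5 specification (our notation, not necessarily the authors' exact equation) is:

∆lnC_{c,t} = α + β₁ ∆LTI_{c,t−1} + β₂ (LTV share>75%)_{c,t−1} + γ′X_{c,t} + ε_{c,t},

where c indexes cohorts and X collects the additional controls listed above (income growth, changes in household composition, and growth in housing and financial wealth).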

Philip Bunn works in the Bank’s Structural Economics Division and May Rostom works in the Bank’s Monetary Policy Outlook Division.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Monetary policy, sectoral comovement and the credit channel

Published by Anonymous (not verified) on Tue, 17/08/2021 - 11:30pm in

Federico Di Pace and Christoph Görtz

There is ample evidence that a monetary policy tightening triggers a decline in consumer price inflation and a simultaneous contraction in investment and consumption (eg Erceg and Levin (2006) and Monacelli (2009)). However, in a standard two-sector New Keynesian model, consumption falls while investment increases in response to a monetary policy tightening. In a new paper, we propose a solution to this problem, known as the ‘comovement puzzle’. Guided by new empirical evidence on the relevance of frictions in credit provision, we show that adding these frictions to the standard model resolves the comovement puzzle. This has important policy implications because the degree of comovement between consumption and investment matters for the effectiveness of monetary policy.

Empirical evidence on the comovement puzzle

We first conduct an empirical investigation into the source of comovement and find this to be the credit channel, ie the provision of funds by the financial sector to finance production and investment activities of non-financial firms. To do so, we employ state-of-the-art methodology to identify (conventional) US monetary policy shocks in a structural vector autoregression (SVAR) by using an external instrument recently proposed by Miranda-Agrippino and Ricco (2021). This instrument utilises high-frequency data to ensure robustness to the presence of information frictions in the economy. Chart 1 shows the impulse responses to an unexpected rise in the policy rate. We find that a contractionary monetary policy shock triggers a decline in both investment and consumption goods production. We find this goes hand in hand with a tightening in financial conditions, evident from the rise in the excess bond premium. Chart 2 shows that the shock also triggers a substantial and persistent contraction in private banks’ equity capital. This tightening in financial conditions is evidence that the transmission mechanism of monetary policy operates via financial markets and its link with investment. The credit channel has attracted increasing attention since the global financial crisis, and its empirical relevance has recently been stressed particularly for the transmission of monetary policy shocks (Caldara and Herbst (2019) and Miranda-Agrippino and Ricco (2021)).
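For intuition, the external-instrument idea can be sketched in a few lines. This is a toy version, not the Miranda-Agrippino and Ricco (2021) procedure, and the names are ours: the instrument isolates the monetary policy shock, and the covariance of the reduced-form residuals with the instrument pins down the relative impact responses.

```python
import numpy as np

def proxy_impact_column(u, z, policy_col):
    """Toy external-instrument (proxy-SVAR) identification.
    u: (T x n) reduced-form VAR residuals; z: (T,) instrument correlated with
    the monetary policy shock but not with the other structural shocks.
    Returns impact responses of all variables relative to a unit move in the
    policy variable (column `policy_col`)."""
    u = u - u.mean(axis=0)
    z = z - z.mean()
    cov_uz = u.T @ z / len(z)            # E[u_i z] for each variable i
    return cov_uz / cov_uz[policy_col]   # normalise: policy rate moves by 1 on impact
```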

Chart 1: Empirical responses to a contractionary monetary policy shock

Chart 2: The response of bank equity to a contractionary monetary policy shock

The importance of supply-side financial frictions

No existing studies that address the comovement puzzle account for the empirical movements in bank equity and credit spreads (see eg DiCecio (2009); Sterk (2010); Carlstrom and Fuerst (2009); Katayama and Kim (2013); Di Pace and Hertweck (2019) among others). They either assume frictionless financial markets, or consider frictions in firms’ demand for investment funding. Neither can give rise to positive credit spreads, nor account for the documented supply-side frictions in financial markets. As such, they miss an important channel that has been widely acknowledged in the literature as important for the transmission and amplification of various economic shocks. Further, the empirical evidence on the relevance of the credit channel calls for an extension of the two-sector New Keynesian model with a mechanism that accounts for these supply-side financial frictions. Such a mechanism can potentially also help align the model’s predictions with the empirical evidence, thereby offering an appealing way to resolve the comovement puzzle. This is exactly what we propose in our paper.

The role of financial frictions to resolve the comovement puzzle

We resolve the comovement puzzle by building on the New Keynesian model of Görtz and Tsoukalas (2017), which has two distinct sectors that produce consumption and investment goods, respectively. We account for credit frictions, which our empirical analysis found to be the source of the comovement, by adding financial frictions as in Gertler and Karadi (2011). These frictions arise because financial intermediaries are subject to an endogenous leverage constraint since the value of collateral limits banks’ ability to fund the real economy. As such, the two-sector New Keynesian model consists of standard components and nests a number of widely used frameworks (eg Justiniano et al (2011) or DiCecio (2009)). The model also includes the usual set of nominal and real rigidities that the literature has found important to match the hump-shaped responses in the data.
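In the Gertler and Karadi (2011) class of models, this friction can be summarised (in simplified notation of our own) by an endogenous leverage constraint on intermediaries:

Q_t S_t ≤ φ_t N_t,

where Q_t S_t is the market value of the bank’s claims on firms’ capital, N_t is bank net worth (equity capital), and φ_t is the endogenous leverage ratio implied by the incentive constraint. When the price of capital Q_t falls, net worth is eroded and, with leverage capped, lending must contract by a multiple of the equity loss – the amplification visible in Chart 3.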

Chart 3: Model responses to a contractionary monetary policy shock

Chart 3 shows the model’s impulse responses to a contractionary monetary policy shock. We consider our baseline model with financial frictions (blue solid lines) and a model version without frictions in financial markets (red dashed lines). The chart illustrates that a rise in the policy rate in our baseline model triggers a decline in total output, as well as in investment and consumption. This model can solve the comovement puzzle and match the empirically observed rise in credit spreads alongside a sharp decline in banks’ equity capital. In contrast, the responses of the standard two-sector New Keynesian model without financial frictions display a comovement problem: the red dashed lines in Chart 3 show that investment does not decline alongside consumption in response to the tightening in monetary policy. In addition, the model without financial frictions falls short in matching the empirically observed rise in credit spreads and the decline in bank equity capital.

Why does the standard New Keynesian two-sector model fail to deliver comovement between consumption and investment? In the model without financial frictions, a rise in the policy rate causes a decline in inflation and a contraction in demand for consumer goods and services. This causes a decline in production in the consumption sector as firms that cannot adjust their prices reduce output in line with the reduction in demand. The fall in output in the consumption sector reduces the demand for labour in that sector, which in turn puts downward pressure on real wages and real marginal cost. Since labour is perfectly mobile across sectors, real wages also decline in the investment goods sector making it cheaper to produce these goods. For this reason, the relative price of investment goods falls, causing the demand for investment goods (ie, capital accumulation) to rise, contrary to what is seen in the data.

What is the mechanism that allows for comovement of investment and consumption in the baseline model? The baseline model exhibits an additional channel that dampens capital accumulation by weakening the financial position of banks. Chart 3 shows that the contractionary monetary policy shock reduces the price of capital (Tobin’s marginal Q). Since banks collateralise debt against the value of firms’ physical capital stock, a fall in the price of capital reduces the collateral value of capital claims, resulting in a deterioration of bank equity capital. The severe contraction in equity capital, shown in Chart 3, is consistent with our empirical evidence. This is important as the dynamic response of equity capital is behind a sharp contraction in credit supply. Since banks are highly leveraged, the decline in bank equity exacerbates the reduction in lending to the real economy. The reduced funding for investment projects results in a contraction in investment. To rebuild their balance sheets, banks must charge a higher interest rate over the base rate, thereby increasing credit spreads.

The tightening of credit conditions in the baseline model is the channel, in comparison to the frictionless model, that limits the financing of investment projects and thereby induces a contraction in investment. Alongside the reduction in consumption, the fall in investment is consistent with the empirical evidence on comovement of sectoral outputs. Also consistent with their empirical counterparts are the dynamics of credit spreads and bank equity, which are crucial for facilitating comovement across expenditure categories.

Concluding remarks

Developing structural models which resolve the comovement puzzle has important policy implications since a lack of comovement can result in near monetary neutrality. We show that supply-side financial frictions are an important mechanism for resolving the comovement puzzle. This channel can be complemented by other mechanisms suggested in the literature – we show for example that the financial channel is strengthened by the introduction of nominal wage rigidities. However, our work accounts for an important dimension that the existing literature on the comovement puzzle has ignored. The distinguishing feature relative to previous work is the financial channel: it not only helps to match the empirical comovement between expenditure categories, but is also crucial for reproducing the empirical responses of the excess bond premium and bank equity to an unexpected monetary contraction.

Federico Di Pace works in the Bank’s Monetary Policy Outlook Division and Christoph Görtz is a senior lecturer in macroeconomics at the University of Birmingham.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

How Does U.S. Monetary Policy Affect Emerging Market Economies?

Published by Anonymous (not verified) on Wed, 04/08/2021 - 3:15am in

Ozge Akinci and Albert Queralto


The question of how U.S. monetary policy affects foreign economies has received renewed interest in recent years. The bulk of the empirical evidence points to sizable effects, especially on emerging market economies (EMEs). A key theme in the literature is that these spillovers operate largely through financial channels—that is, the effects of a U.S. policy tightening manifest themselves abroad via declines in international risky asset prices, tighter financial conditions, and capital outflows. This so-called Global Financial Cycle has been shown to affect EMEs more forcefully than advanced economies, because higher U.S. policy rates have a disproportionately large impact on rates in EMEs. In our recent research, we develop a model with cross-border financial linkages that provides theoretical foundations for these empirical findings. In this Liberty Street Economics post, we use the model to illustrate the spillovers from a tightening of U.S. monetary policy on credit spreads and on the uncovered interest rate parity (UIP) premium in EMEs with dollar-denominated debt.

Real Effects of U.S. Monetary Policy on EMEs

We start by estimating the effects of U.S. monetary policy on EMEs, using a structural vector autoregression (SVAR) model that includes EME GDP and U.S. variables such as GDP, inflation, unemployment, capacity utilization, consumption, investment, and the federal funds rate for the period 1978:Q1-2008:Q4. The key identification assumption in the SVAR model is that the only variable that the U.S. monetary policy shock affects contemporaneously is the federal funds rate.
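A minimal sketch of this recursive identification is below; it is not the authors’ code, and the file name, column names and lag choice are hypothetical. Ordering the federal funds rate last in a Cholesky decomposition makes it the only variable the monetary policy shock moves on impact.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly dataset covering 1978:Q1-2008:Q4, with the federal
# funds rate ordered last (recursive/Cholesky identification).
cols = ["us_gdp", "us_inflation", "us_unemployment", "us_caputil",
        "us_consumption", "us_investment", "eme_gdp", "fed_funds_rate"]
df = pd.read_csv("svar_data.csv", index_col=0, parse_dates=True)[cols]

model = VAR(df)
res = model.fit(maxlags=4, ic="aic")   # lag length chosen by information criterion

# Orthogonalised impulse responses; with this ordering, the shock to the
# last variable is the monetary policy shock.
irf = res.irf(periods=20)
irf.plot(orth=True, impulse="fed_funds_rate", response="eme_gdp")
```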

The results are shown in the chart below. The red line in each panel indicates the point estimates of the impulse response functions, while the gray dotted lines mark the corresponding 95 percent probability bands. The blue line shows the predictions of our model, where we calibrate the larger economy to the United States, and take the smaller economy to represent a bloc of EMEs, such as the Asian or the Latin American EMEs. Starting with the U.S. economy, the model captures the dynamic response of U.S. output to a U.S. monetary policy shock remarkably well. A monetary policy innovation that raises the U.S. federal funds rate by 100 basis points induces U.S. output to fall around 0.50 percent at the trough, very close in magnitude to the response implied by our model.

[Chart: impulse responses of U.S. and EME output to a U.S. monetary policy shock, showing SVAR point estimates with 95 percent probability bands and model predictions]

We next turn to the spillovers to emerging markets. In response to the same shock, EME output falls around 0.45 percent at the trough, broadly comparable in magnitude to the decline in U.S. GDP, and remains below its baseline path well after the effect of the shock on interest rates is gone. Our model captures both the magnitude and the persistence of the response of EME output reasonably well, although the model-implied EME output response is somewhat less sluggish than the SVAR-implied one.

Disentangling Channels of Spillovers

Having shown that the model’s predictions on the spillovers of a U.S. monetary policy shock on EMEs are plausible, we next use it to disentangle channels through which these shocks transmit to EMEs. Our model predicts two channels of transmission: the trade channel and the financial channel. The chart below displays how much EME GDP would be affected through each channel.

[Chart: decomposition of the spillovers to EMEs into trade and financial channels, with panels including EME output, credit spreads and the UIP risk premium]

The trade channel operates through a fall in EME exports due to lower U.S. demand. This effect is partially offset by EME exports becoming cheaper as the dollar appreciates. Overall, EME output declines by about 0.10 percent relative to baseline due to the trade channel.

The financial channel operates through lower investment spending due to both rising credit spreads and larger UIP risk premia (shown in the lower left and lower right panels of the chart, respectively). Note that UIP risk premia are defined as the difference between the required return by global investors for lending to EMEs (adjusted for expected exchange rate changes) and the return on U.S. safe assets. To highlight the amplification role played by the deviations from UIP, we first shut down this channel (by assuming that UIP holds at all times and that there is no dollar debt in EME balance sheets), and show the predicted effects by the gold lines in the four panels of the chart. The drop in EME asset prices following the U.S. rate hike works to initiate losses in EME borrowers’ balance sheets. Weaker EME balance sheets then give rise to higher domestic lending spreads, making credit more expensive for EME borrowers and triggering declines in investment, and ultimately slowing economic activity. This is the standard financial accelerator effect typically present in models with credit market frictions, causing EME output to fall by an additional 0.15 percent below baseline.
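In symbols (illustrative notation rather than the paper’s), with e_t the log price of a dollar in EME currency, the UIP premium is

```latex
\[
\text{UIP premium}_t \;=\; i^{EME}_t \;-\; i^{US}_t \;-\; E_t\!\left[e_{t+1} - e_t\right],
\]
```

which is zero when uncovered interest parity holds and positive when global investors require extra compensation, beyond expected depreciation, for lending to EMEs.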

Our model adds an additional amplification mechanism based on the interaction between balance sheets and external financing conditions. Now, the EME’s exchange rate depreciation following the U.S. rate hike causes additional losses in EME borrowers’ balance sheets (over and above the effects of the drop in EME asset prices). This occurs due to the presence of some dollar debt on the balance sheet of these borrowers. Because the assets held by EME borrowers are denominated in the local currency, the depreciation of the local currency against the dollar that occurs in the wake of the U.S. tightening raises the real burden of the dollar debt, thus reducing borrowers’ net worth further. In equilibrium, a weakening of local balance sheets widens the deviation from UIP, which in turn is accommodated via a depreciation of the EME currency against the dollar. Because local balance sheets are partly mismatched, a weaker local currency then feeds back into balance sheet health, further weakening it. The result is sharply amplified declines in the value of the EME currency and in investment, more than offsetting the positive effect of depreciation on EME exports, which brings the total decline in EME output to 0.45 percent relative to baseline.

Testing the Link between UIP Deviations and Financial Stress

The model features a time-varying UIP risk premium that increases with the domestic lending spreads in EMEs. We test this prediction of the model using data from Korea and present the results in the table below.

[Table: regressions of the change in Korea’s real exchange rate on the interest differential, the corporate bond spread, a crisis indicator and the VIX]

The second column shows our results, where we regress the change in the real exchange rate on the changes in the interest differential and the corporate bond spread. We find that the coefficient on the spread is highly statistically significant, and the presence of the spread improves the fit considerably. In the third and fourth columns, we include an indicator variable for the crisis periods, which takes the value one in the months 1998:8–1999:3 and 2008:9–2009:3 and zero otherwise, and a measure of global risk aversion (proxied by the VIX), respectively. As shown, the coefficient on the spread continues to be significant, lending support to the mechanism we propose in the model.
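A minimal sketch of these regressions, with hypothetical file and variable names rather than the authors’ code or data, might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly Korean dataset: d_rer is the change in the real
# exchange rate, d_idiff the change in the interest differential, d_spread
# the change in the corporate bond spread, d_vix the change in the VIX.
df = pd.read_csv("korea_monthly.csv", index_col=0, parse_dates=True)

# Crisis dummy: one in 1998:8-1999:3 and 2008:9-2009:3, zero otherwise.
crisis_months = pd.period_range("1998-08", "1999-03", freq="M").union(
    pd.period_range("2008-09", "2009-03", freq="M"))
df["crisis"] = df.index.to_period("M").isin(crisis_months).astype(int)

baseline = smf.ols("d_rer ~ d_idiff + d_spread", data=df).fit()
with_crisis = smf.ols("d_rer ~ d_idiff + d_spread + crisis", data=df).fit()
with_vix = smf.ols("d_rer ~ d_idiff + d_spread + d_vix", data=df).fit()

print(baseline.summary())   # is the coefficient on d_spread significant?
```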

In sum, we present a model where the effects of a U.S. monetary policy shock on EMEs are amplified due to UIP premia that are correlated with domestic lending spreads, consistent with the evidence. Our research provides theoretical foundations for the Global Financial Cycle, whereby monetary contractions in the United States lead to a tightening of foreign financial conditions, and for more recent findings that these effects are larger in EMEs than in advanced economies.

Ozge Akinci is a senior economist in the Federal Reserve Bank of New York’s Research and Statistics Group.

Albert Queralto is a principal economist at the Board of Governors of the Federal Reserve System.

How to cite this post:

Ozge Akinci and Albert Queralto, “How Does U.S. Monetary Policy Affect Emerging Market Economies?,” Federal Reserve Bank of New York Liberty Street Economics, May 17, 2021, https://libertystreeteconomics.newyorkfed.org/2021/05/how-does-us-moneta....

Disclaimer

The views expressed in this post are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.

Alternative Visions of Inflation

Published by Anonymous (not verified) on Wed, 28/07/2021 - 3:57am in

Like many people, I’ve been thinking a bit about inflation lately. One source of confusion, it seems to me, is that the underlying concept has shifted in a rather fundamental way, but the full implications of this shift haven’t been taken on board.

I was talking with my Roosevelt colleague Lauren Melodia about inflation and alternative policies to manage it, which is a topic I hope Roosevelt will be engaging in more in the later part of this year. In the course of our conversation, it occurred to me that there’s a basic source of confusion about inflation. 

Many of our ideas about inflation originated in the context of a fixed quantity of money. The original meaning of the term “inflation” was an increase in the stock of money, not a general increase in the price level. Over there you’ve got a quantity of stuff; over here you’ve got a quantity of money. When the stock of money grows rapidly and outpaces the growth of stuff, that’s inflation.

 In recent decades, even mainstream economists have largely abandoned the idea of the money stock as a meaningful economic quantity, and especially the idea that there is a straightforward relationship between money and inflation.

Here is what a typical mainstream macroeconomics textbook — Olivier Blanchard’s, in this case; but most are similar — says about inflation today. (You can just read the lines in italics.) 

There are three stories about inflation here: one based on expected inflation, one based on markup pricing, and one based on unemployment. We can think of these as corresponding to three kinds of inflation in the real world — inertial, supply-driven, and demand-driven. What there is not, is any mention of money. Money comes into the story only in the way that it did for Keynes: as an influence on the interest rate. 

To be fair, the book does eventually bring up the idea of a direct link between the money supply and inflation, but only to explain why it is obsolete and irrelevant for the modern world:

Until the 1980s, the strategy was to choose a target rate of money growth and to allow for deviations from that target rate as a function of activity. The rationale was simple. A low target rate of money growth implied a low average rate of inflation. … 

That strategy did not work well.

First, the relation between money growth and inflation turned out to be far from tight, even in the medium run. … Second, the relation between the money supply and the interest rate in the short run also turned out to be unreliable. …

Throughout the 1970s and 1980s, frequent and large shifts in money demand created serious problems for central banks. … Starting in the early 1990s, a dramatic rethinking of monetary policy took place based on targeting inflation rather than money growth, and the use of an interest rate rule.

Obviously, I don’t endorse everything in the textbook.1 (The idea of a tight link between unemployment and inflation is not looking much better than the idea of a tight link between inflation and the money supply.) I bring it up here just to establish that the absence of a link between money growth and inflation is not radical or heterodox, but literally the textbook view.

One way of thinking about the first Blanchard passage above is that the three stories about inflation correspond to three stories about price setting. Prices may be set based on expectations of where prices will be, or prices may be set based on market power (the markup), or prices may be set based on costs of production. 

This seems to me to be the beginning of wisdom with respect to inflation: Inflation is just an increase in prices, so for every theory of price setting there’s a corresponding theory of inflation. There is wide variation in how prices get set across periods, countries and markets, so there must be a corresponding variety of inflations. 

Besides the three mentioned by Blanchard, there’s one other story about inflation that is perhaps even more widespread. We could call this too much spending chasing too little production. 

The too-much-spending view of inflation corresponds to a ceiling on output, rather than a floor on unemployment, as the inflationary barrier. As the NAIRU has given way to potential output as the operational form of supply constraints on macroeconomic policy, this understanding of inflation has arguably become the dominant one, even if without formalization in textbooks. It overlaps with the unemployment story in making current demand conditions a key driver of inflation, even if the transmission mechanism is different. 

Superficially, “too much spending relative to production” sounds a lot like “too much money relative to goods.” (As, to a lesser extent, does “too much wage growth relative to productivity growth.”) But while these formulations sound similar, they have quite different implications. Intuitions formed by the old quantity-of-money view don’t work for the new stories.

The older understanding of inflation, which runs more or less unchanged from David Hume through Irving Fisher to Milton Friedman and contemporary monetarists, goes like this. There’s a stock of goods, which people can exchange for their mutual benefit. For whatever reasons, goods don’t exchange directly for other goods, but only for money. Money in turn is only used for purchasing goods. When someone receives money in exchange for a good, they turn around and spend it on some good themselves — not instantly, but after some delay determined by the practical requirements of exchange. (Imagine you’ve collected your earnings from your market stall today, and can take them to spend at a different market tomorrow.) The total amount of money, meanwhile, is fixed exogenously — the quantity of gold in circulation, or equivalently the amount of fiat tokens created by the government via its central bank.

Under these assumptions, we can write the familiar equation

MV = PY

If Y, the level of output, is determined by resources, technology and other “real” factors, and V is a function of the technical process of exchange — how long must pass between the receipt of money and its spending — then we’re left with a direct relationship between the change in M and the change in P. “Inflation is always and everywhere a monetary phenomenon.”2
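Taking growth rates of both sides (a standard rearrangement, not anything specific to this post) makes the logic explicit:

```latex
\[
\frac{\Delta M}{M} + \frac{\Delta V}{V} \;\approx\; \frac{\Delta P}{P} + \frac{\Delta Y}{Y}
\quad\Longrightarrow\quad
\pi \;\equiv\; \frac{\Delta P}{P} \;\approx\; \frac{\Delta M}{M} + \frac{\Delta V}{V} - \frac{\Delta Y}{Y}.
\]
```

With V pinned down by the mechanics of exchange and Y by real factors, inflation moves one for one with money growth; relax either assumption and the tight link disappears.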

I think something like this underlies most folk wisdom about inflation. And as is often the case, the folk wisdom has outlived whatever basis in reality it may once have had.3

Below, I want to sketch out some ways in which the implications of the excessive-spending-relative-to-production vision of inflation are importantly different from those of the excessive-money-relative-to-goods vision. But first, a couple of caveats.

First, the idea of a given or exogenous quantity of money isn’t wrong a priori, as a matter of logic; it’s an approximation that happens not to fit the economy in which we live. Exactly what range of historical settings it does fit is a tricky question, which I would love to see someone try to answer. But I think it’s safe to say that many important historical inflations, both under metallic and fiat regimes, fit comfortably enough in a monetarist framework. 

Second, the fact that the monetarist understanding of inflation is wrong (at least for contemporary advanced economies) doesn’t mean that the modern mainstream view is right. There is no reason to think there is one general theory of inflation, any more than there is one general etiology of a fever. Lots of conditions can produce the same symptom. In general, inflation is a persistent, widespread rise in prices, so for any theory of price-setting there’s a corresponding theory of inflation. And the expectations-based propagation mechanism of inertial inflation — where prices are raised in the expectation that prices will rise — is compatible with many different initial inflationary impulses. 

That said — here are some important cleavages between the two visions.

1. Money vs spending. More money is just more money, but more spending is always more spending on something in particular. This is probably the most fundamental difference. When we think of inflation in terms of money chasing a given quantity of goods, there is no connection between a change in the quantity of money and a change in individual spending decisions. But when we think of it in terms of spending, that’s no longer true — a decision to spend more is a decision to spend more on some specific thing. People try to carry over intuitions from the former case to the latter, but it doesn’t work. In the modern version, you can’t tell a story about inflation rising that doesn’t say who is trying to buy more of what; and you can’t tell a story about controlling inflation without saying whose spending will be reduced. Spending, unlike money, is not a simple scalar.

The same goes for the wages-markup story of the textbook. In the model, there is a single wage and a single production process. But in reality, a fall in unemployment or any other process that “raises the wage” is raising the wages of somebody in particular.

2. Money vs prices. There is one stock of money, but there are many prices, and many price indices. Which means there are many ways to measure inflation. As I mentioned above, inflation was originally conceived of as definitionally an increase in the quantity of money. Closely related to this is the idea of a decrease in the purchasing power of money, a definition which is still sometimes used. But a decrease in the value of money is not the same as an increase in the prices of goods and services, since money is used for things other than purchasing goods and services.  (Merijn Knibbe is very good on this.4) Even more problematically, there are many different goods and services, whose prices don’t move in unison. 

This wasn’t such a big deal for the old concept of inflation, since one could say that all else equal, a one percent increase in the stock of money would imply an additional point of inflation, without worrying too much about which specific prices that showed up in. But in the new concept, there’s no stock of money, only the price changes themselves. So picking the right index is very important. The problem is, there are many possible price indexes, and they don’t all move in unison. It’s no secret that inflation as measured by the CPI averages about half a point higher than that measured by the PCE. But why stop there? Those are just two of the infinitely many possible baskets of goods one could construct price indexes for. Every individual household, every business, every unit of government has their own price index and corresponding inflation rate. If you’ve bought a used car recently, your personal inflation rate is substantially higher than that of people who haven’t. We can average these individual rates together in various ways, but that doesn’t change the fact that there is no true inflation rate out there, only the many different price changes of different commodities.

3. Inflation and relative prices. In the old conception, money is like water in a pool. Regardless of where you pour it in, you get the same rise in the overall level of the pool.

Inflation conceived of in terms of spending doesn’t have that property. First, for the reason above — more spending is always more spending on something. If, let’s say for sake of argument, over-generous stimulus payments are to blame for rising inflation, then the inflation must show up in the particular goods and services that those payments are being used to purchase — which will not be a cross-section of output in general. Second, in the new concept, we are comparing desired spending not to a fixed stock of commodities, but to the productive capacity of the economy. So it matters how elastic output is — how easily production of different goods can be increased in response to stronger demand. Prices of goods in inelastic supply — rental housing, let’s say — will rise more in response to stronger demand, while prices of goods supplied elastically — online services, say — will rise less. It follows that inflation, as a concrete phenomenon, will involve not an across-the-board increase in prices, but a characteristic shift in relative prices.

This is a different point than the familiar one that motivates the use of “core” inflation — that some prices (traditionally, food and energy) are more volatile or noisy, and thus less informative about sustained trends. It’s that  when spending increases, some goods systematically rise in price faster than others.

This recent paper by Stock and Watson, for example, suggests that housing, consumer durables and food have historically seen prices vary strongly with the degree of macroeconomic slack, while prices for gasoline, health care, financial services, clothing and motor vehicles do not, or even move the opposite way. They suggest that the lack of a cyclical component in health care and finance reflects the distinct ways that prices are set (or imputed) in those sectors, while the lack of a cyclical component in gas, clothing and autos reflects the fact that these are heavily traded goods whose prices are set internationally. This interpretation seems plausible enough, but if you believe these numbers they have a broader implication: We should not think of cyclical inflation as an across-the-board increase in prices, but rather as an increase in the price of a fairly small set of market-priced, inelastically supplied goods relative to others.

4. Inflation and wages. As I discussed earlier in the post, the main story about inflation in today’s textbooks is the Phillips curve relationship where low unemployment leads to accelerating inflation. Here it’s particularly clear that today’s orthodoxy has abandoned the quantity-of-money view without giving up the policy conclusions that followed from it.

In the old monetarist view, there was no particular reason that lower unemployment or faster wage growth should be associated with higher inflation. Wages were just one relative price among others. A scarcity of labor would lead to higher real wages, while an exogenous increase in wages would lead to lower employment. But absent a change in the money supply, neither should have any effect on the overall price level. 

It’s worth noting here that although Milton Friedman’s “natural rate of unemployment” is often conflated with the modern NAIRU, the causal logic is completely different. In Friedman’s story, high inflation caused low unemployment, not the reverse. In the modern story, causality runs from lower unemployment to faster wage growth to higher inflation: prices are set as a markup over marginal costs. If the markup is constant, and all wages are part of marginal cost, and all marginal costs are wages, then a change in wages will just be passed through one to one to inflation.

We can ignore the stable markup assumption for now — not because it is necessarily reasonable, but because it’s not obvious in which direction it’s wrong. But if we relax the other assumptions, and allow for non-wage costs of production and fixed wage costs, that unambiguously implies that wage changes are passed through less than one for one to prices. If production inputs include anything other than current labor, then low unemployment should lead to a mix of faster inflation and faster real wage growth. And why on earth should we expect anything else? Why shouldn’t the 101 logic of “reduced supply of X leads to a higher relative price of X” be uniquely inapplicable to labor?5
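To see why the pass-through is less than one for one, take a stylised markup-pricing rule (illustrative notation, not Blanchard’s exact formulation), with productivity A, a non-labour input price Z, and weight alpha on labour costs in marginal cost:

```latex
\[
P = (1+\mu)\left[\alpha\,\frac{W}{A} + (1-\alpha)\,Z\right]
\quad\Longrightarrow\quad
\frac{\partial \ln P}{\partial \ln W}
= \frac{\alpha W/A}{\alpha W/A + (1-\alpha)Z} \;<\; 1 \quad\text{for } \alpha < 1.
\]
```

Holding the markup, productivity and non-labour costs fixed, a 1% wage increase raises prices by less than 1%, so the real wage rises.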

There’s an obvious political-ideological reason why textbooks should teach that low unemployment can’t actually make workers better off. But I think it gets a critical boost in plausibility — a papering-over of the extreme assumptions it rests on — from intuitions held over from the old monetarist view. If inflation really was just about faster money growth, then the claim that it leaves real incomes unchanged could work as a reasonable first approximation. Whereas in the markup-pricing story it really doesn’t. 

5. Inflation and the central bank.  In the quantity-of-money vision, it’s obvious why inflation is the special responsibility of the central bank. In the textbooks, managing the supply of money is often given as the first defining feature of a central bank. Clearly, if inflation is a function of the quantity of money, then primary responsibility for controlling it needs to be in the hands of whoever is in charge of the money supply, whether directly, or indirectly via bank lending. 

But here again, it seems, to me, the policy conclusion is being asked to bear weight even after the logical scaffolding supporting it has been removed. 

Even if we concede for the sake of argument that the central bank has a special relationship with the quantity of money, it’s still just one of many influences on the level of spending. Indeed, when we think about all the spending decisions made across the economy, “at what interest rate will I borrow the funds for it” is going to be a central consideration in only a few of them. Whether our vision of inflation is too much spending relative to the productive capacity of the economy, or wages increasing faster than productivity, many factors are going to play a role beyond interest rates or central bank actions more broadly. 

One might believe that compared with other macro variables, the policy interest rate has a uniquely strong and reliable link to the level of spending and/or wage growth; but almost no one, I think, does believe this. The distinct responsibility of the central bank for inflation gets justified not on economic grounds but political-institutional ones: the central bank can act more quickly than the legislature, it is free of undue political influence, and so on. These claims may or may not be true, but they have nothing in particular to do with inflation. One could justify authority over almost any area of macroeconomic policy on similar grounds.

Conversely, once we fully take on board the idea that the central bank’s control over inflation runs through the volume of credit creation to the level of spending (and then perhaps via unemployment to wage growth), there is no basis for the distinction between monetary policy proper and other central bank actions. All kinds of regulation and lender-of-last-resort operations equally change the volume and direction of credit creation, and so influence aggregate spending just as monetary policy in the narrow sense does.

6. The costs of inflation. If inflation is a specifically monetary phenomenon, the costs of inflation presumably involve the use of money. The convenience of quoting relative prices in money becomes a problem when the value of money is changing.

An obvious example is the fixed denominations of currency — monetarists used to talk about “shoe leather costs” — the costs of needing to go more frequently to the bank (as one then did) to restock on cash. A more consequential example is public incomes or payments fixed in money terms. As recently as the 1990s, one could find FOMC members talking about bracket creep and eroded Social Security payments as possible costs of higher inflation — albeit with some embarrassment, since the schedules of both were already indexed by then. More broadly, in an economy organized around money payments, changes in what a given flow of money can buy will create problems. Here’s one way to think about these problems:

Social coordination requires a mix of certainty and flexibility. It requires economic units to make all kinds of decisions in anticipation of the choices of other units — we are working together; my plans won’t work out if you can change yours too freely. But at the same time, you need to have enough space to adapt to new developments — as with train cars, there needs to be some slack in the coupling between economic units for things to run smoothly. One dimension of this slack is the treatment of some extended period as if it were a single instant.

This is such a basic, practical requirement of contracting and management that we hardly think about it. For example, budgets — most organizations budget for periods no shorter than a quarter, which means that as far as internal controls and reporting are concerned, anything that happens within that quarter happens at the same time.6 Similarly, invoices normally require payment in 30 or 60 days, thus treating shorter durations as instantaneous. Contracts of all kinds are signed for extended periods on fixed money terms. All these arrangements assume that the changes in prices over a few months or a year are small enough that they can be safely ignored; they can be modified when inflation is high enough to make untenable the fiction that 30, 60 or 90 days is an instant. Social coordination strongly benefits from the convention that price changes over such short durations can be ignored, which means people behave in practice as if they expect inflation over those periods to be zero.

Axel Leijonhufvud’s mid-70s piece on inflation is one of the most compelling accounts of this kind of cost of inflation — the breakdown of social coordination — that I have seen. For him, the stability of money prices is the sine qua non of decentralized coordination through markets. 

In largely nonmonetary economies, important economic rights and obligations will be inseparable from particularized relationships of social status and political allegiance and will be in some measure permanent, inalienable and irrevocable. … In monetary exchange systems, in contrast, the value to the owner of an asset derives from rights, privileges, powers and immunities against society generally rather than from the obligation of some particular person. …

Neoclassical theories rest on a set of abstractions that separate “economic” transactions from the totality of social and political interactions in the system. For a very large set of problems, this separation “works”… But it assumes that the events that we make the subject of … the neoclassical model of the “economic system” do not affect the “social-political system” so as … to invalidate the institutional ceteris paribus clauses of that model. …

 Double-digit inflation may label a class of events for which this assumption is a bad one. … It may be that … before the “near-neutral” adjustments can all be smoothly achieved, society unlearns to use money confidently and reacts by restrictions on “the circles people shall serve, the prices they shall charge, and the goods they can buy.”

One important point here is that inflation has a much greater impact than in conventional theory because of the price-stability assumption incorporated into any contract that is denominated in money terms and not settled instantly — which is to say, pretty much any contract. So whatever expectations of inflation people actually hold, the whole legal-economic system is constructed in a way that makes it behave as if inflation expectations were biased toward zero:

The price stability fiction — a dollar is a dollar is a dollar — is as ingrained in our laws as if it were a constitutional principle. Indeed, it may be that no real constitutional principle permeates the Law as completely as does this manifest fiction.

The market-prices-or-feudalism tone of this seems more than a little overheated from today’s perspective, and when Arjun and I asked him about this piece a few years ago, he seemed a bit embarrassed by it. But I still think there is something to it. Market coordination, market rationality, the organization of productive activity through money payments and commitments, really does require the fiction of a fixed relationship between quantities of money and real things. There is some level of inflation at which this is no longer tenable.

So I have no problem with the conventional view that really high inflations — triple digits and above — can cause far-reaching breakdowns in social coordination. But this is not relevant to the question of inflation of 1 or 2 or 5 or probably even 10 percent. 

In this sense, I think the mainstream paradoxically both understates and overstates the real costs of inflation. They exaggerate the importance of small differences in inflation. But at the same time, because they completely naturalize the organization of life through markets, they are unable to talk about the possibility that it could break down.

But again, this kind of breakdown of market coordination is not relevant for the sorts of inflation seen in the United States or other rich countries in modern times. 

It’s easier to talk about the costs (and benefits) of inflation when we see it as a change in relative prices, and redistribution of income and wealth. If inflation is typically a change in relative prices, then the costs are experienced by those whose incomes rise more slowly than their payments. Keynes emphasized this point in an early article on “Social Consequences of a Change in the Value of Money.”7

A change in the value of money, that is to say in the level of prices, is important to Society only in so far as its incidence is unequal. Such changes have produced in the past, and are producing now, the vastest social consequences, because, as we all know, when the value of money changes, it does not change equally for all persons or for all purposes. … 

Keynes sees the losers from inflation as passive wealth owners, while the winners are active businesses and farmers; workers may gain or lose depending on the degree to which they are organized. For this reason, he sees moderate inflation as being preferable to moderate deflation, though both as evils to be avoided — until well after World War II, the goal of price stability meant what it said.

Let’s return for a minute to the question of wages. As far as I can tell, the experience in modern inflations is that wage changes typically lag behind prices. If you plot nominal wage growth against inflation, you’ll see a clear positive relationship, but with a slope well below 1. This might seem to contradict what I said under point 4. But my point there was that insofar as inflation is driven by increased worker bargaining power, it should be associated with faster real wage growth. In fact, the textbook is wrong not just on logic but on facts. In principle, a wage-driven inflation would see a rise in real wages. But most real inflations are not wage-driven.

In practice, the political costs of inflation are probably mostly due to a relatively small number of highly salient prices. 

7. Inflation and production. The old monetarist view had a fixed quantity of money confronting a fixed quantity of goods, with the price level ending up at whatever equated them. As I mentioned above, the fixed-quantity-of-money part of this has been largely abandoned by modern mainstream as well as heterodox economists. But what about the other side? Why doesn’t more spending call forth more production?

The contemporary mainstream has, it seems to me, a couple of ways of answering the question. One is the approach of a textbook like Blanchard’s. There, higher spending does lead to higher employment and output and lower unemployment. But unless unemployment is at a single unique level — the NAIRU — inflation will rise or fall without limit. It’s exceedingly hard to find anything that looks like a NAIRU in the data, as critics have been pointing out for a long time. Even Blanchard himself rejects it when he’s writing for central bankers rather than undergraduates. 

There’s a deeper conceptual problem as well. In this story, there is a tradeoff between unemployment and inflation. Unemployment below the NAIRU does mean higher real output and income. The cost of this higher output is an inflation rate that rises steadily from year to year. But even if we believed this, we might ask, how much inflation acceleration is too much? Can we rule out that a permanently higher level of output might be worth a slowly accelerating inflation rate?

Think about it: In the old days, the idea that the price level could increase without limit was considered crazy. After World War II, the British government imposed immense costs on the country not just to stabilize inflation, but to bring the price level back to its prewar level. In the modern view, this was crazy — the level of prices is completely irrelevant. The first derivative of prices — the inflation rate — is also inconsequential, as long as it is stable and predictable. But the second derivative — the change in the rate of inflation — is apparently so consequential that it must be kept at exactly zero at all costs. It’s hard to find a good answer, or indeed any answer, for why this should be so.

The more practical mainstream answer is to say not that there is a tradeoff between unemployment and inflation with one unambiguously best choice, but that there is no tradeoff. In this story, there is a unique level of potential output (not a feature of the textbook model) at which the relationship between demand, unemployment and inflation changes. Below potential, more spending calls forth more production and employment; above potential, more spending only calls forth higher inflation. This looks better as a description of real economies, particularly given the recent experience of long periods of elevated unemployment that have not, contrary to the NAIRU prediction, resulted in ever-accelerating deflation. But it begs the question of why there should be such a sharp line.

The alternative view would be that investment, technological change, and other determinants of “potential output” also respond to demand. Supply constraints, in this view, are better thought of in terms of the speed with which supply can respond to demand, rather than an absolute ceiling on output.

Well, this post has gotten too long, and has been sitting in the virtual drawer for quite a while as I keep adding to it. So I am going to break off here. But it seems to me that this is where the most interesting conversations around inflation are going right now — the idea that supply constraints are not absolute but respond to demand with varying lags — that inflation should be seen as often a temporary cost of adjustment to a new higher level of capacity. And the corollary, that anti-inflation policy should aim at identifying supply constraints as much as, or more than, restraining demand. 

Slow recoveries, endogenous growth and macroprudential policy

Published by Anonymous (not verified) on Tue, 27/07/2021 - 6:00pm in

Dario Bonciani, David Gauthier and Derrick Kanngiesser

Following the global financial crisis in 2008, central banks around the world introduced tighter banking regulations to increase the resilience of the financial sector and reduce the risks of severe financial disruptions during economic downturns. This fact has motivated a large body of literature to assess the role that macroprudential (MacroPru) policies play in mitigating the severity of recessions. One common finding is that the benefits of MacroPru are relatively minor within standard dynamic stochastic general equilibrium (DSGE) models. In a new paper, we show that MacroPru becomes significantly more important in a model that accounts for the long-term negative consequences of financial disruptions.

Empirical evidence on the long-run impact of financial shocks

To motivate our theoretical analysis, we first provide empirical evidence on the effects of financial crises for a panel of 24 advanced economies. We document an important difference between financial crises and traditional recessions. As displayed in Chart 1 we find that, following a banking crisis, total factor productivity (TFP), GDP, and research and development (R&D) substantially decline and do not revert to their pre-crisis level within 10 years, in line with previous literature. By contrast, other types of recessions are associated with milder and short-lived contractions in real activity.

Chart 1: Estimated effects of banking crisis and other recessions

A model of MacroPru with endogenous growth

We study the implications of MacroPru through the lens of a medium-scale DSGE model, into which we incorporate frictions in the financial sector and endogenous productivity growth. Financial intermediaries, built along the lines of Gertler et al (2012), fund themselves using short-term debt and outside equity. The cost of outside equity depends on the state of the world and moves in line with the return on assets. The risk exposure of financial intermediaries is, therefore, a result of their financing choice. We model MacroPru as a subsidy on outside equity, which increases the resilience of financial intermediaries to shocks adversely affecting asset prices and net worth.

Our approach to modelling MacroPru captures two important real-world features. First, the key objective of MacroPru is to avoid banks taking on too much debt, which may be particularly risky during economic downturns. In reality, this policy often takes the form of minimum capital or liquidity requirements. These two policy measures are however difficult to model in the context of a standard DSGE. Second, in line with the regulatory frameworks of many countries, the macroprudential intervention in our model is countercyclical, ie it becomes tighter when the (private) cost of debt is low and banks would, therefore, have the incentive to substantially increase their leverage.

The second key feature of our model is an endogenous growth mechanism in the spirit of Grossman and Helpman (1991) and Aghion and Howitt (1992). The labour-augmenting productivity of intermediate output firms depends on the aggregate level of intangible capital or ‘knowledge’. This additional form of capital implies that the production function will feature increasing returns to scale and that the growth rate of the real variables in the model will depend on the rate of accumulation of intangible capital in the economy.
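A stylised way to write this down (a sketch in the spirit of the model, not the paper’s exact specification) is

```latex
\[
Y_t = K_t^{\alpha}\,\big(A_t L_t\big)^{1-\alpha}, \qquad A_t = \chi\, S_t,
\qquad S_{t+1} = (1-\delta_S)\,S_t + I^{S}_t,
\]
```

where S_t is the aggregate stock of intangible capital (‘knowledge’) and I^S_t is R&D-type investment. Counting physical capital, labour and intangible capital as inputs, the technology exhibits increasing returns to scale, and trend growth inherits the growth rate of intangible capital, so anything that persistently depresses intangible investment lowers the economy’s long-run path.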

When a financial shock hits, there is a substantial fall in investment in both physical and intangible capital. This causes an initial drop in productivity growth, which damages intangible capital formation, and hence causes a permanent fall in output. By contrast, shocks originating outside the financial sector do not tighten financing conditions in the economy as much. Intangible investment (eg R&D) and hence productivity growth fall less in response to standard adverse demand and supply shocks. By facilitating the flow of credit towards investment, MacroPru positively impacts productivity growth and the long-term level of real activity. This stands in contrast with a model of exogenous growth where long-run growth is constant, hence limiting the potential role of MacroPru.

Financial intermediaries in our model take asset prices as given. The failure to recognise the external benefits associated with more stable asset prices constitutes an inefficiency that warrants a macroprudential policy intervention by providing additional incentives to rely more on equity finance. Monetary policy would not affect the cost difference between debt and equity and is therefore unsuited to nudge banks towards higher capital ratios.

Revisiting the gains from MacroPru

Following an adverse financial shock, we find that MacroPru can roughly halve the slowdown in productivity growth and the size of the long-run output hit. Accounting for its potential long-term benefits, we find that MacroPru improves household welfare by approximately 7% compared to the unregulated scenario. This result is around 10 times larger than commonly found in the existing literature using models without the endogenous growth mechanism. In our model, we find an optimal bank capital ratio of about 18%, which is roughly 4 percentage points higher than under exogenous growth.

We also highlight that MacroPru significantly reduces the probability of the monetary policy rate reaching the zero lower bound (ZLB). This probability is about 1.1 per cent in the absence of MacroPru and zero under MacroPru regulation. As shown in Chart 2, without MacroPru (blue and green lines) an adverse financial shock causes a substantial contraction in real activity and the economy’s growth rate. The output losses become particularly severe when monetary policy is unable to respond to the fall in demand due to a binding ZLB constraint (green line). In the presence of MacroPru regulation, the financial system is more resilient, and asset prices fall less. This mitigates the tightening in credit conditions and significantly eases the fall in demand. As a result, the policy interest rate never reaches the ZLB constraint and the output losses are significantly smaller both in the short and the long-term.

Chart 2: Response to a credit contraction

Policy implications

Our work highlights the importance of taking the long-term costs of financial crises into account when assessing the benefits of macroprudential policy. The surprisingly small welfare gains commonly found in the theoretical literature are a consequence of ignoring long-term effects and endogenous growth channels. Because productivity growth and the balanced growth path of the economy are endogenous and vulnerable to financial shocks, a stronger macroprudential response is justified.

Dario Bonciani, David Gauthier and Derrick Kanngiesser work in the Bank’s Monetary Policy Outlook Division.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Unemployment risk, liquidity traps and monetary policy

Published by Anonymous (not verified) on Tue, 06/07/2021 - 6:00pm in

Dario Bonciani and Joonseok Oh

The Global Financial Crisis in 2008 caused a significant and persistent increase in unemployment rates across major advanced economies. The worsening in labour market conditions increased uncertainty about job prospects, which potentially gave rise to precautionary savings, putting further downward pressure on real economic activity and prices. Moreover, in response to the severe drop in demand, central banks worldwide cut short-term nominal interest rates, which rapidly approached the zero lower bound (ZLB), where they remained for a prolonged time. In a recent paper, we show that committing to keep the interest rate at zero longer than implied by current macroeconomic conditions is particularly effective at easing contractions in demand in the presence of countercyclical unemployment risk and low interest rates.

A model with uninsurable unemployment risk

We study optimal monetary policy conduct through the lens of a Heterogeneous Agents New Keynesian (HANK) model with frictions in the labour market, imperfect unemployment insurance, and an occasionally binding ZLB constraint (ie the interest rate may hit the ZLB during a downturn). The model features two types of households: workers and firm owners, though we abstract from the distributional effects of monetary policy in the paper. Workers face the risk of unemployment and a lower income. The presence of idiosyncratic unemployment risk (the possibility of becoming unemployed, which rises in a downturn) leads to a precautionary savings motive for employed workers. Firm owners, instead, do not face any unemployment risk.
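The precautionary-savings motive can be seen in a stylised Euler equation for an employed worker (illustrative notation, not the paper’s):

```latex
\[
u'(c^{e}_{t}) \;=\; \beta R_t\, E_t\!\left[\,(1-\rho^{u}_{t+1})\,u'(c^{e}_{t+1})
\;+\; \rho^{u}_{t+1}\,u'(c^{u}_{t+1})\,\right],
\]
```

where the term rho^u is the probability of losing one’s job next period and consumption when unemployed is lower than when employed because insurance is imperfect. When a downturn raises that probability, expected marginal utility rises, so employed workers consume less and save more today, amplifying the fall in demand.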

We study the impact of monetary policy in response to a negative demand shock that leads the economy into a liquidity trap. We first analyse the economic outcomes when the central bank only responds to current inflation (strict-inflation targeting), comparing the cases with perfect and imperfect unemployment insurance. Given this benchmark, we then study how the economy responds when the central bank follows the optimal monetary policy and can credibly commit to keeping the interest rate ‘lower for longer’ (often referred to as Odyssean forward guidance).

Finally, we study whether simple policy rules can provide results in line with those under optimal monetary policy. In particular, we consider: (i) a Taylor rule augmented with the lagged value of the shadow policy rate (inertial policy rule); (ii) a Price-Level-Targeting (PLT) rule; (iii) an Average-Inflation-Targeting (AIT) rule.
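Stylised versions of these rules, with illustrative coefficients and notation rather than the paper’s exact specification, are sketched below; i*_t is the shadow rate, p_t the log price level, p^target_t a target path for it, and r the neutral rate.

```latex
\begin{align*}
\text{(i) inertial:} \quad & i_t = \max\!\big\{0,\; \rho\, i^{*}_{t-1} + (1-\rho)\,(r + \phi_{\pi}\pi_t)\big\},\\
\text{(ii) PLT:} \quad & i_t = \max\!\big\{0,\; r + \phi_{p}\,(p_t - p^{\text{target}}_t)\big\},\\
\text{(iii) AIT:} \quad & i_t = \max\!\big\{0,\; r + \phi_{\bar{\pi}}\,\bar{\pi}_t\big\},
\qquad \bar{\pi}_t = \tfrac{1}{J}\textstyle\sum_{j=0}^{J-1}\pi_{t-j}.
\end{align*}
```

All three tie the current rate to past outcomes (the lagged shadow rate, the accumulated price-level gap, or a backward-looking inflation average), which is the history dependence that keeps the rate at zero for longer after a deflationary shock.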

What we find

Under strict-inflation-targeting, the adverse demand shock has significantly stronger effects under imperfect unemployment insurance (ie when unemployed workers are only partially compensated for their income loss). This is because the fall in demand reduces job creation and raises unemployment risk, which induces households to increase their savings for precautionary reasons. The precautionary-savings effect leads to a stronger fall in inflation and inflation expectations. Since the nominal interest rate is stuck at zero (and there are no other monetary tools in our model), the real interest rate rises, putting further downward pressure on consumption and output.

Under the optimal policy, instead, the central bank responds to the contraction in demand by committing to hold the policy rate at zero longer than implied by current economic conditions. This policy has the effect of increasing inflation expectations and reducing the real interest rate. With the interest rate being kept ‘lower for longer’, agents expect improvements in labour market conditions, which reduces their precautionary saving behaviour in the presence of imperfect unemployment insurance. As a result, market incompleteness (ie imperfect insurance) amplifies the rise in inflation expectations and the reduction in the real interest rate, thereby mitigating the decline in real activity. Specifically, when the central bank sets an optimal path for the policy rate, an adverse demand shock causes smaller contractions in real economic activity under incomplete markets than under perfect unemployment insurance.

Under the three simple rules, there is history dependence in the nominal policy rate: a fall in inflation today leads the policy rate to stay at zero for longer than current conditions alone would imply.  As a result, all three rules are particularly effective under imperfect insurance. However, unlike the optimal-policy case, these rules do not fully neutralise the fall in inflation expectations caused by the rise in unemployment risk and precautionary savings.

Policy implications

The paper shows that, if the central bank can commit to holding interest rates lower for longer, then such a policy can be particularly effective in the presence of precautionary savings due to higher uninsurable unemployment risk. Within our model, optimal monetary policy can completely offset the deflationary spirals arising from an increase in precautionary savings. Under simpler and more realistic policy rules, the central bank is still able to significantly mitigate the fall in demand. We conclude that, in practice, monetary policy and unemployment insurance policies are necessary tools to stabilise output in response to demand contractions at the ZLB. By reducing the fall in income associated with unemployment, such insurance policies reduce the precautionary savings motive, which in turn reduces the amplification of negative shocks and risk of being stuck in a liquidity trap.

Dario Bonciani works in the Bank’s Monetary Policy Outlook Division and Joonseok Oh works at Freie Universität Berlin.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.
