

Global real rates, 1311–2018

Published on Tue, 07/07/2020 - 6:00pm

Paul Schmelzing

Paul Schmelzing is an academic visitor to the Bank of England, currently based at Yale University. In this guest post, he summarises his research on the differential between real interest rates and real growth rates over the past seven centuries…

There is a lively academic and policy debate about whether a build-up of excess savings in advanced economies has created a drag on long-term interest rates. In a recent paper, I provide new context to these discussions. I construct a long-run advanced economy (DM) public real interest rate series geographically covering 78% of DM GDP since the 14th century. Using this series, I argue that current interest rate trends cannot be rationalized in a “secular stagnation” framework that has been “manifest for two decades”. Rather, historical data illustrates that advanced economy real rates have steadily declined for more than five centuries, despite important reversal periods. This post draws from long-run economic history to provide additional insights concerning the current interest rate environment.

Public rates, nominal and real

Chart 1 displays the new 707-year series in nominal and real terms, compared to two existing series: the “lowest-yielding” semi-centennial trend in Sidney Homer’s and Richard Sylla’s classic book (green series), and data from a recent compilation by Huang, Chilosi and Sapoznik (2019) featuring municipal annuities between the 14th–18th centuries (red series, in which I arithmetically weight all their nominal data points).

The yellow and blue series display the new “global” data in nominal and real terms: these have to be constructed from a wide array of scattered archival and printed sources, and should be thought of as a DM public long-term debt series that incorporates both consolidated and unconsolidated voluntary lending across the full risk spectrum. In this sense, they are comparable to the “global” series presented by King and Low (2014), with the key differences being that I measure ex post (rather than ex ante) inflation, and GDP-weight my sample throughout.
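The GDP-weighting step itself is simple. A toy sketch, with purely illustrative country rates and GDP weights (none of these figures are from the paper):

```python
# Hypothetical example: combine country-level real rates into an
# aggregate series by weighting each observation with that country's
# share of total sample GDP. All numbers are illustrative.

def gdp_weighted_rate(rates, gdps):
    """Weighted average of rates, with weights proportional to GDP."""
    total = sum(gdps)
    return sum(r * g for r, g in zip(rates, gdps)) / total

# One year's cross-section: real rates (%) and GDP (arbitrary units)
rates = [4.0, 6.5, 5.0]
gdps = [120.0, 40.0, 40.0]
print(gdp_weighted_rate(rates, gdps))  # 4.7
```

Repeating this for every year of the sample, with time-varying weights, yields a GDP-weighted aggregate of the kind shown in Chart 1.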

Chart 1: GDP-weighted “global” public real rates and previous series, 1311–2018

The data are less suited to illustrating shorter-term, cyclical-level fluctuations: except for some geographies, early modern cyclical dynamics remain simplistic given my need for interpolations. However, robustness increases at the decadal level, and I argue that the compilation goes substantially beyond existing aggregated series, thus serving to illustrate more granular long-run dynamics. For instance, we can observe the apparent existence of two secular reversal periods, during which public capital markets broke from their general downward trend and entered lasting periods of higher real rates: ca. 1320–1480 and ca. 1520–1650, highlighted in Chart 2. The first episode coincides with the Hundred Years War, the European “Bullion Famine”, and raging Condottiere warfare in Northern Italy. The second covers the “Triple Default of 1557–1559”, the long global wave of bank and merchant defaults, and accelerating population growth from the second half of the 16th century.

Chart 2: Public R-Reversals, advanced economies, ca.1320–1480s and ca.1520s–1650s

Safe(r) real rates: evidence from private long-term debt

Aggregated series such as those in Charts 1 and 2 incorporate premia over the “safe” rate, reflecting default, inflation, or debasement expectations. Is it possible to obtain a series closer to the “safe” rate? One way to tackle this is to construct an (ex post) default-free series of leading political powers with strong commitment mechanisms over time, an approach discussed elsewhere. Chart 3, on the other hand, displays German private long-term mortgage contracts (today’s Pfandbriefe) since the 13th century, in nominal and real terms, sourced from a variety of municipal archives, from Frankfurt and Mainz to Cologne, plus printed sources. It has been argued that such contracts represent a useful approximation of a “risk-free” financial asset over time, since they were secured by real estate, fully transferable, and exempted from any usury laws.

Such rates trade notably below public rates during 1300–ca.1650, though one has to remember that they are of course not entirely riskless (there being no truly riskless asset). I similarly observe that such private, relatively safe rates have been falling since the late Middle Ages – more steadily and less aggressively (by 0.6 basis points p.a. in real terms) than the public series, but similarly unimpressed by major historical institutional and regime changes.
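A drift of this kind can be quantified as the slope of an OLS time trend, converted to basis points per annum. A minimal sketch on synthetic data with a built-in drift of 0.6 bps p.a. (the archival series itself is not reproduced here):

```python
# Illustrative: estimate the average annual drift of a real-rate series
# (in %) as the OLS slope on calendar year, expressed in basis points.
import numpy as np

def trend_bps_per_year(years, real_rates_pct):
    """OLS slope of the real rate on year, converted to bps (1% = 100 bps)."""
    slope_pct_per_year = np.polyfit(years, real_rates_pct, 1)[0]
    return slope_pct_per_year * 100.0

# Synthetic series with an exact drift of -0.006 % (= -0.6 bps) per year
years = np.arange(1500, 2001, 50)
rates = 8.0 - 0.006 * (years - 1500)
print(round(trend_bps_per_year(years, rates), 2))  # -0.6
```

On real data the fitted slope summarises the average drift across centuries while smoothing over reversal episodes of the kind shown in Chart 2.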

Chart 3: German nominal and real mortgage rates, and “private R”-G

A derivative measure, “R-G”, has recently attracted interest in the context of debt sustainability debates. R-G is the difference between the real long-term interest rate (R here does not refer to the general return on wealth debated in the context of inequality trends) and real GDP growth. I reconstruct a new, long-term R-G series by combining the new “R” with existing GDP data. Though early modern German GDP estimates are less robust than similar ones for Britain, Spain, or Italy, there is a common trend in each country’s R-G series. Namely, the presently depressed R-G environment discussed by Blanchard or Barrett (2018) has been long in the making, and the general decline in the spread has not previously been permanently reversed by exogenous shocks (note the “Kipper and Wipper” plunge of 1619–1621) beyond the short term. Secularly, advanced economies have in this sense continuously improved debt sustainability: by roughly 1 basis point (bp) per annum in the case of “private R”-G, with the spread itself averaging just under 2.5% over the past 500 years (Chart 3).
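The spread itself is simple arithmetic: the real rate minus real growth, period by period. A toy illustration with made-up decadal figures (not the paper’s estimates):

```python
# R-G: real long-term interest rate minus real GDP growth, in per cent.
# The decadal figures below are invented for illustration only.

def r_minus_g(real_rate_pct, real_gdp_growth_pct):
    return real_rate_pct - real_gdp_growth_pct

# Falling R against roughly stable or rising G narrows the spread over time.
decades = [(1600, 6.0, 0.5), (1800, 4.5, 1.0), (2000, 1.0, 2.0)]
for year, r, g in decades:
    print(year, r_minus_g(r, g))  # 5.5, then 3.5, then -1.0
```

A secular decline in this spread is what “continuously improved debt sustainability” means in this context: the lower R-G is, the easier it is to stabilise a given debt-to-GDP ratio.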

Historical versus “neutral” rates

Willem Buiter sensibly pointed out that even the narrower “safe R” approximations I use (be it the default-free public R sample, or the private R series) may not be consistent with the “neutral rate” concept often employed in the context of the secular stagnation debate. Indeed, since the measurement of “neutral” rates even for recent years is not entirely straightforward, their construction for early modern markets presents additional challenges. However, since I isolate tradable, transferable, and repression-free instruments, with a meaningful bias towards relatively financialised municipalities, a smoothed version of the “safe public R” or the “global R” series should allow us to reasonably approximate a steady-state condition. Jordà, Singh and Taylor’s exercise along these lines on the basis of my data appears to suggest exactly that: “neutral” real rates have likewise declined since the Renaissance, and on this basis, too, developments since the 1980s equally mark a “return to historical trend”.

Chart 4 now illustrates these points by grouping all real rate data according to three broad risk profiles – the “safe” private series, an ex post default-free public series tracing the leading economy over time, and a “risky” public real rate series, which includes historical default events. A finer disaggregation is certainly desirable, but once more these exercises consistently suggest that dismissing long-run real interest rate trends by invoking “risk premia” or “neutral rate” dynamics is too simplistic, and the idea that real interest rates have historically trended around a “normal”, “stable” level correlated with factors such as GDP growth remains highly doubtful.

Chart 4: R trends across three risk classes, advanced economies, 1311–2018

Liquidity and market integration back then

Similarly, GDP-weighting and aggregating separate municipal-level and country-level observations should not obscure the fact that financial market integration was far from perfect in the medieval and early modern periods. This is despite evidence of significant progress during the 15th century towards financial market integration, and despite the fact that the debt contracts considered here were overwhelmingly denominated in international gold currency, the Florentine Florin, or its German derivative, the Rheingulden.

It is often assumed that serious market depth for public debt in advanced economies was only achieved in 17th century Holland and 18th century Britain. Against that backdrop, high pre-1700 interest rates are often assumed to partly reflect prohibitive levels of illiquidity. However, the actual evidence is more nuanced. No doubt, various early modern geographies fall short of even remotely comparable financial market depth, particularly in peripheral European towns and regions. But on the other hand, political bodies in Northern Italy, the Holy Roman Empire, France, and Flanders often showed real per capita levels of debt stocks and annual funded debt issuance exceeding those of 18th century England or 17th century Holland. In general, it appears rather underappreciated how “financialised” the pre-1700 era already was. To that end, Chart 5 calculates real per capita funded debt stocks in a range of “advanced economies” between 1313–1500, benchmarking against the 1752 English real consolidated debt stock per capita. Full sources for these calculations are available here. We note that the majority of the geographies included in our sample reach at least 65–80% of the benchmark debt level, and often comfortably more. A handful of geographies fall below such a level – but even if we omitted such issuers entirely and imposed a strict 80% threshold, our aggregate trends would be little changed.

Chart 5: Real funded debt stock per capita outstanding, selected locations versus 1752 British levels, 1313–1500

Where next?

With recourse to new archival and printed data, I show that real interest rates have trended downwards, typically in a range of 0.5–2 bps p.a., since at least the days of Charles VIII’s triggering of the Italian Wars in the 1490s, and across geographies and monetary and financial regimes. Such a statement obscures much historical and empirical nuance, but the new evidence is able to qualify “secular stagnation” propositions: the fall since the 1980s appears much less of an aberration once we factor in deep history. Liquidity and other risk premia components are of substantial importance in the early modern period, but they cannot fully explain the downward trend over time. I suggest that there must remain additional long-run drivers working on advanced economy public and private markets across the DM risk spectrum for half a millennium now. Significant short-term fiscal or monetary responses, or fundamental changes to the international financial regime (such as the transition to fiat money), have often caused sharp initial volatility (and have historically by no means been “costless”), but they have not thus far permanently upended these long-run drivers.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

UK productivity growth from 2008 to 2018: weakness was structural, not cyclical

Published on Fri, 03/07/2020 - 6:00pm

Marko Melolinna

Monetary policy makers need to know whether the economy is operating above or below its supply capacity. If the economy is operating above its supply capacity, inflation is likely to rise, and vice versa. A crucial component of supply capacity is the labour productivity trend, but we cannot observe this directly. We have to estimate it. Thankfully, there are ways of splitting observed macroeconomic time series into estimated trend and cyclical components. Using a variety of methods on UK data over the period 1991 to 2018, I find that UK productivity growth has been structurally, rather than cyclically, weak since the financial crisis. And UK trend productivity has been strongly correlated with trend productivity in other advanced economies.

In a recent Working Paper, my co-author and I studied the recent weakness in labour productivity in the United Kingdom. The black line in Chart 1 shows that labour productivity – defined as output per hour worked – has stagnated since around 2008. This stagnation has received much attention in the blogosphere, including in blogs by Kimball et al (2013), Lewis (2018), Schneider (2018) and Wren-Lewis (2017). We wanted to see if structural or cyclical components of the data could better explain this weakness. One popular method for splitting data into its trend and cycle components is to use unobserved components models (UCM). (For some examples of this see Morley et al (2003), Mitra and Sinclair (2012) and Grant and Chan (2017).) We introduce a set of one-variable (univariate) and two-variable (bivariate) models that allow us to estimate trends and cycles for the productivity data.

The univariate models we use only have UK productivity as an observable time series, while we add productivity in peer economies in the bivariate versions. We also use Bayesian techniques to estimate the models. Bayesian techniques allow us to feed prior information into the estimation. For example, we could have a view on the size of a particular parameter of the model (like how productivity depends on its own lagged value), and we can express this as a prior for the model. We can also express our views on how confident we are in our prior beliefs by adjusting the ‘tightness’ of the prior: the more confident we are, the tighter the prior. Bayesian estimation then lets the actual data either move the estimated (posterior) parameter value away from the prior or keep it close to it. It does this by maximising the likelihood of the data, given the priors and the model that we are using.

Furthermore, in these types of models, it is possible to model directly the correlation between shocks to the trend and the cycle. People have different views about this issue. Some think that the trend is a long-term phenomenon uncorrelated with short-term cyclical variations around the trend. But others think that trends and cycles can be correlated, either positively or negatively. The correlation could be positive, for example, if cyclical shocks have a persistent effect on the trend. Arguably, this happened in the financial crisis in the 2000s. On the other hand, the correlation could be negative. For example, a technological improvement could imply higher trend productivity immediately (ie, a positive shock to the trend). But, if actual productivity only catches up later, this would look like a negative cyclical shock.
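To make the correlated trend-cycle structure concrete, a stripped-down unobserved components model can be simulated directly: a random-walk trend plus an AR(1) cycle, with a non-zero correlation between the two shocks. All parameter values below are illustrative, not estimates from the paper:

```python
# Simulate y_t = trend_t + cycle_t, where trend follows a random walk,
# cycle follows an AR(1), and the two shocks may be correlated.
# Parameter values are illustrative only.
import numpy as np

def simulate_ucm(n, rho=0.8, sigma_trend=0.2, sigma_cycle=1.0,
                 corr=-0.5, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[sigma_trend**2, corr * sigma_trend * sigma_cycle],
           [corr * sigma_trend * sigma_cycle, sigma_cycle**2]]
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    trend = 100.0 + np.cumsum(shocks[:, 0])           # random-walk trend
    cycle = np.zeros(n)
    for t in range(1, n):
        cycle[t] = rho * cycle[t - 1] + shocks[t, 1]  # stationary AR(1) cycle
    return trend, trend + cycle                       # trend, observed series

trend, observed = simulate_ucm(112)  # e.g. 112 quarters of data
print(len(observed))                 # 112
```

Estimation runs this logic in reverse: given only the observed series, a filter (with priors on the variances and the correlation) infers the unobserved trend and cycle, which is why the prior choices discussed above matter so much.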

The framework we use has the advantage that we can calculate the likelihood of each model and so assess our choice of priors. This turns out to be very important. Our priors for the correlation between, and the relative volatility of, trend and cycle shocks turn out to affect the smoothness of the estimated trend, our main interest. In our data, if the prior is set to be consistent with a smooth trend, then we find that non-correlated UC models, with no parameterised correlation between the trend and cycle shocks, or correlated models with strongly smoothed trends are the likeliest to fit the data. This typically results in a relatively smooth estimated trend. On the other hand, if the prior allows for a more volatile trend, then by far the most likely models are generally the ones allowing for correlation between the trend and the cycle shocks, rather than non-correlated UC models. This causes the resulting estimated trend to be relatively more volatile.

An example of two trends in a univariate model allowing for correlation between the trend and the cycle shocks is shown in Chart 1. By varying the prior for the relative volatility of the trend and the cycle, we can obtain very different estimates of the trend. If we set a relatively smooth trend as a prior, the estimated trend is nearly a straight line. At the other extreme, if we set a relatively volatile trend, the estimated trend is near the data. This shows that one needs to be very careful when setting priors in these types of models.

Chart 1: UK productivity data and its trend from selected univariate model specifications (1991 Q1=100)

Notes: Productivity defined as UK GDP divided by total hours worked in the UK economy. The red and green lines show different estimated trends for this series based on two alternative priors for the volatility of shocks to this trend in a correlated UCM.

Sources: ONS and author’s calculations.

Whichever of our models is used, our evidence suggests that the trend productivity growth rate in the United Kingdom has been substantially weaker since the financial crisis (Chart 2). This result is consistent with other studies finding an important role for structural drivers in explaining UK productivity dynamics in recent years. (See, for example, Goodridge et al (2016) and Oulton and Sebastiá-Barriel (2017).) Given that our approach is more focused on empirical observations about aggregate trends than most other studies on this topic, it is encouraging that the main conclusion is similar.

We also find evidence that there is a significant positive correlation between shocks to UK trend productivity and those of other advanced economies. These positive correlations between trends appear to have become stronger since the financial crisis, which is consistent with the view of synchronised global shocks affecting macroeconomic dynamics more now than before the crisis (see Chart 3).

Chart 2: Average pre and post-crisis trend productivity growth rates (year-on-year %) – selected models

Note: The chart shows the range of highest and lowest average trend productivity growth rates from four different UCM models on UK productivity.

Source: Author’s calculations.

Chart 3: Trend correlation coefficient for UK and US productivity

Notes: The chart shows the rolling correlation coefficient of UK and US productivity trend shocks from a bivariate UCM. Dashed lines are 90% confidence intervals.

Source: Author’s calculations.

Our results are consistent with a relatively pessimistic view of post-financial crisis productivity dynamics in the UK. The weakness of trend productivity growth appears to be consistent with a secular, long-term stagnation type narrative. All is not lost though – things can change quickly. It is possible that positive real economy shocks could quickly lead to an improvement in trend productivity. (See, for example, Carney (2018) and Haldane (2018) for remarks on the effects of technological progress.) More structural models and views on future technological progress are needed to formulate forecasts for that; our model – like any time series model, no matter how sophisticated – is a reflection of past, not future data dynamics. But, it can still help monetary policy makers know where the economy is operating relative to its current supply potential. Improving the economy’s supply potential is likely to require structural policies that are beyond their scope.

Marko Melolinna works in the Bank’s Structural Economics Division.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Monetary policy and US housing expansions: what can we expect for the post-COVID-19 housing recovery?

Published on Tue, 23/06/2020 - 6:00pm

Bruno Albuquerque, Martin Iseringhausen and Frédéric Opitz

The fall in aggregate demand due to the COVID-19 shock has brought the eight-year long US housing market expansion to a halt. At the same time, the Federal Reserve and the US Government have deployed significant resources to support households and businesses. These actions should help weather the ongoing crisis and lay the seeds for the next recovery. It is, however, highly uncertain how the post-COVID-19 housing recovery will look. Using a time-varying parameter (TVP) model on US aggregate data, our results suggest that the next housing recovery may exhibit similar features to the 2012-19 expansion: a sluggish response of housebuilding to rising demand, but a strong response of house prices.

Why is housing important?

Housing is a very different asset from any other financial asset (eg equities or bonds). The home owner owns not only the dwelling itself but also the land it sits on. Housing can also be seen as a consumption good, in the sense that the owner consumes the housing services of living there. Moreover, housing is of particular interest to economists and policymakers alike. First, housing is the main asset for the majority of households. Over 65% of US households own a home, while only around 24% directly own stocks and bonds. In addition, the distribution of housing assets is less skewed towards the richest, in contrast to financial assets: households in the bottom 80% of the income distribution hold roughly 43% of total real estate wealth in the economy, but only 11% of total stocks and bonds. Second, real estate accounts for a substantial fraction of economic activity, with private residential investment accounting for 4% of GDP, and housing consumption accounting for 20% of total private consumption expenditures. Third, recent research has found that the marginal propensity to consume (MPC) out of housing wealth is much larger than that out of financial wealth.

The prominent role of housing in the economy has generated a lot of interest about how shocks to demand affect house prices and the real economy more generally. For instance, an expansion in monetary policy can stimulate housing demand via lower borrowing costs — households typically finance housing with mortgage debt. Apart from this credit channel, monetary policy can also influence housing demand through the collateral or refinancing channel, whereby easier monetary conditions and higher house prices lead to higher home equity. This additional equity allows households to increase borrowing to finance consumption.

Housebuilding in the recent recovery had been remarkably different from past episodes

House prices in the United States had been increasing at a strong pace in the years before the pandemic shock struck. But despite the strong expansion in house prices, housebuilding activity had been relatively weak compared to the strong housing boom over 1996-2006.

To place the dynamics of the housing market and economic activity into historical perspective, we compare real house prices, building permits, and real GDP across the past four housing expansions. We use the Harding-Pagan algorithm to identify turning points in real house price cycles since the mid-1970s. We identify four housing expansions: 1975 M9–1979 M7, 1982 M11–1989 M3, 1996 M12–2006 M4, and 2012 M3–2019 M12. These cycles are broadly in line with the housing literature. We scale real GDP and house prices to be 100 at the beginning of each housing expansion, so that we can easily compare the dynamics across indicators. We show building permits as a percentage of the housing stock at the beginning of each cycle. To smooth out strong fluctuations in the monthly data, we show quarterly data in Figure 1.
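The dating and rebasing steps above can be sketched in a few lines. This is a heavily simplified stand-in for the Harding-Pagan algorithm: it treats a point as a peak (trough) if it is the maximum (minimum) within a symmetric window, and omits the algorithm’s censoring rules on minimum phase and cycle lengths. The data are invented:

```python
# Simplified turning-point dating in the spirit of Harding-Pagan:
# a peak (trough) is a local maximum (minimum) within a +/- k window.
# The real algorithm adds censoring rules omitted here.

def turning_points(series, k=2):
    peaks, troughs = [], []
    for t in range(k, len(series) - k):
        window = series[t - k: t + k + 1]
        if series[t] == max(window):
            peaks.append(t)
        elif series[t] == min(window):
            troughs.append(t)
    return peaks, troughs

def rebase(series, start):
    """Scale the series to 100 at the start of an expansion."""
    return [100.0 * x / series[start] for x in series]

prices = [90, 95, 100, 104, 101, 96, 92, 97, 103, 108]  # made-up index
peaks, troughs = turning_points(prices)
print(peaks, troughs)  # [3] [6]: peak at t=3, trough at t=6
print(rebase(prices, troughs[0])[6])  # 100.0 at the expansion's start
```

Rebasing each expansion to 100 at its trough is what makes the four episodes directly comparable in Figure 1.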

The pace of house price appreciation displays a broadly similar pattern across housing expansions, with the exception of the 1982-89 period (Figure 1). The recent recovery that started in early 2012, and that may have ended in early 2020, is particularly similar to the one in the run-up to the Great Financial Crisis (GFC). By contrast, housebuilding activity, as illustrated by the number of building permits approved for construction, stands out in the 2012-19 expansion: the flow of permits averaged 0.9% of the initial housing stock in 2012, which compares with an average of 1.6%-2.0% during the previous expansions. Research has found that the sluggish pace of housebuilding is linked to a decline in supply elasticities, which in turn is likely related to the tightening in land-use regulation over the past decades. Declining supply elasticities imply that an increase in housing demand may be absorbed more by house prices. Finally, while the GDP dynamics have been relatively similar across the first three housing expansions, the economic performance during the most recent recovery has been subdued. In the light of these changes in the housing market, we ask whether the transmission of housing demand to the housing market has changed over time. We use monetary policy shocks as a proxy for shifts in housing demand to investigate this question.

Figure 1: Housing expansions since 1975

Sources: Bureau of Economic Analysis, Census Bureau, Federal Housing Finance Agency, and authors’ calculations. Notes: Real GDP and real house prices are scaled to be 100 at the beginning of each housing expansion. Building permits refers to its flow as % of the housing stock at the beginning of each expansion. The horizontal axis shows quarters around the beginning of each expansion, and the vertical line at zero is the starting point.

Model results

We apply a flexible time-varying parameter (TVP) vector autoregressive (VAR) model that is largely data-driven in modelling changes in the dynamic relationship between variables. We estimate the TVP-VAR model over 1991 M1–2019 M6 using seven variables: monthly GDP from Macroeconomic Advisers, CPI, the policy rate measured by a shadow rate to account for the Zero Lower Bound, building permits, house prices, the GZ spread, and bank credit. A model with these variables produces dynamic responses that are supported by standard macroeconomic theory. We use state-of-the-art high-frequency monetary policy shocks obtained from market surprises in intraday data on Fed funds futures contracts following monetary policy announcements.

A monetary policy shock that decreases the policy rate stimulates housing demand and therefore raises output, building permits, and house prices (Figure 2). But while the response of GDP seems to be broadly constant over time, the responses of house prices and permits exhibit substantial time variation. In particular, house prices have become more responsive since the GFC, reaching levels similar to those during the pre-crisis period. In contrast, although the response of permits has recovered after the GFC, it remains well below historical averages.

Figure 2: Responses to an expansionary monetary policy shock over time

Notes: The figure shows the cumulative median time-varying responses of real house prices, building permits, and real GDP to an expansionary monetary policy shock that decreases the policy rate by 25 bp.

We investigate this time variation further by looking at the relative responses of permits to house prices one and three years after a monetary policy shock. These relative responses allow us to gain insights into the dynamic relationship between permits and house prices: the percentage change in permits for each percentage point change in house prices. The relationship between permits and house prices seems rather stable from the early 1990s until the GFC. After the GFC, we find evidence of a substantial decline in the relative response of permits for a given percentage change in house prices: a drop of around 15% in the relative response after one year, and of around 20% after three years (upper panel of Figure 3). This finding is consistent with the notion that construction has been reacting less to changes in demand, ie that US housing supply elasticities have declined.
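A relative response of this kind is just a ratio of cumulative impulse responses at a given horizon. A toy computation with made-up impulse response coefficients (not the paper’s estimates):

```python
# Relative response: cumulative change in permits divided by the
# cumulative change in house prices over a given horizon after a shock.
# The impulse response coefficients below are invented for illustration.

def relative_response(irf_numerator, irf_denominator, horizon):
    """Ratio of cumulative impulse responses up to `horizon` periods."""
    return sum(irf_numerator[:horizon]) / sum(irf_denominator[:horizon])

permits_irf = [0.8, 0.6, 0.4, 0.3, 0.2, 0.1]  # % responses of permits
prices_irf = [0.5, 0.5, 0.4, 0.3, 0.2, 0.1]   # % responses of house prices
print(round(relative_response(permits_irf, prices_irf, 4), 3))
```

In the TVP setting this ratio is re-computed at each point in time, which is how a post-GFC decline in the relative response of permits can be detected.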

We also look at the relative percentage increase in GDP for each percentage increase in house prices, akin to the concept of a sacrifice ratio. We find that the transmission of monetary policy to real activity relative to house prices has weakened since around 2013. Our estimates show that an expansionary monetary policy shock in 2019 M6 raised house prices relatively more than output. The experience around the GFC stands in stark contrast. This was a period when the measures adopted by the Federal Reserve to fight the crisis seemed to have supported economic activity more relative to house prices (bottom panel of Figure 3). One possible interpretation of our results is that a shock to housing demand that is increasingly absorbed by house prices, rather than supply, may contribute less to overall economic output. This point relates to recent research arguing that the low interest rate environment in the aftermath of the GFC may have rendered monetary policy less effective for economic activity.

Figure 3: Relative impulse responses to an expansionary monetary policy shock

Notes: The figure shows the cumulative time-varying responses to an expansionary monetary policy shock of building permits and real GDP relative to real house prices. We show the responses for 12 and 36 months after the shock. The grey bars represent US recessions as defined by the NBER.

Implications for the post-COVID-19 recovery

The COVID-19 pandemic and the resulting economic contraction are likely creating challenges for the housing market. The next economic recovery may be very different from previous ones — a recovery from a recession marked by simultaneous supply, demand and uncertainty shocks. Our results suggest, however, that the next housing recovery may exhibit similar features to those of the 2012-19 housing expansion: a sluggish response of housebuilding to rising demand but a strong price reaction.

Bruno Albuquerque works in the Bank’s Macro-Financial Risks Division and
Martin Iseringhausen and Frédéric Opitz work at Ghent University.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Weekly Top 5 Papers Jun 8 to 14, 2020

Published on Tue, 16/06/2020 - 10:50am

Top 5 Papers, based on downloads from 06/08/2020 to 06/14/2020

1. An Inconvenient Fact: Private Equity Returns & The Billionaire Factory, by Ludovic Phalippou (University of Oxford – Said Business School). ID 3623820; 2,416 downloads.

2. Economic Effects of Coronavirus Outbreak (COVID-19) on the World Economy, by Nuno Fernandes (University of Navarra, IESE Business School). ID 3557504; 2,372 downloads.

3. Do shifts in late-counted votes signal fraud? Evidence from Bolivia, by Nicolás Idrobo, Dorothy Kronick and Francisco Rodríguez (University of Pennsylvania; Tulane University). ID 3621475; 1,300 downloads.

4. The Global Macroeconomic Impacts of COVID-19: Seven Scenarios, by Warwick J. McKibbin and Roshen Fernando (Australian National University; Centre of Excellence in Population Ageing Research, CEPAR). ID 3547729; 1,146 downloads.

5. Patterns of COVID-19 Mortality and Vitamin D: An Indonesian Study, by Prabowo Raharusun, Sadiah Priambada, Cahni Budiarti, Erdie Agung and Cipta Budi (Independent). ID 3585561; 760 downloads.


Covid-19 Briefing: International Trade and Supply Chains

Published by Anonymous (not verified) on Thu, 11/06/2020 - 6:00pm in

Rebecca Freeman and Rana Sajedi

The Covid-19 pandemic has led to both a decline in economic activity that has been propagated across borders through global supply networks, and a rise in barriers to trade between countries. This has led to a rapidly emerging literature seeking to understand the effects of the pandemic on trade. This post surveys some of the key contributions of that literature. Key messages from early papers are that: i) The shock is a hit to both demand and supply, and is thus deeper than what was experienced during the 2008/09 Great Trade Collapse; ii) Global value chains have amplified cross-country spillovers; iii) When supply chains are highly integrated, protectionist measures can disrupt production of medical equipment and supplies; and, hence, iv) Keeping international trade open during the crisis can help to limit the economic cost of the pandemic and foster global growth during the recovery.

Demand as well as supply channels

Baldwin and Tomiura (2020) point out that the pandemic is simultaneously a global supply and demand shock. Baldwin (2020) notes that the 2008/09 Great Trade Collapse was a ‘sudden, synchronised and broad’ decline in trade, but was mostly a result of a global demand shock. This time there are additional effects rippling through supply chains, as suppliers have difficulty sourcing inputs. The direct supply-side disruption in the current crisis is thus likely to lead to an even ‘Greater Trade Collapse’ in 2020.

But the story with services trade might be different, since the shock will encourage remote, tele-intermediated interactions – the ‘heart and soul’ of many services. Baldwin and Tomiura (2020) thus speculate that Covid-19 could well end up increasing trade in services, despite the hit to tourism.

Global Value Chains as propagation mechanisms

A key difference between this pandemic and earlier global crises is the increased interconnectedness of the global economy. Baldwin and Freeman (2020a) note that large Global Value Chain (GVC) hubs, which account for the lion’s share of world manufacturing output, have experienced broad-reaching effects of the pandemic on their ability to produce goods and services. Given these countries’ centrality in global trade networks, the resulting shutdown of production to curtail the virus will have disproportionate ripple-through effects on the world economy. Stressing the importance of both direct and indirect (ie via third-country) supply-chain linkages, they calculate that, for their sample of 21 major manufacturing countries, exposure to China is significant: manufacturing inputs from China range from 3.7% of manufacturing output (Netherlands) to 16.4% (Korea). Although countries are less reliant on the US than they are on China, inputs from the US are nonetheless important in as many nations as those from China.
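The distinction between direct and total (direct plus indirect, via third countries) supply-chain exposure can be sketched with a Leontief inverse over a stylised input-output table. The country set and all coefficients below are hypothetical, purely for illustration of the mechanics, not the figures from Baldwin and Freeman (2020a):

```python
import numpy as np

# Hypothetical 3-country input-output coefficients: A[i, j] is the value of
# inputs sourced from country i needed per unit of gross output in country j.
countries = ["CHN", "USA", "NLD"]
A = np.array([
    [0.10, 0.04, 0.03],  # inputs sourced from China
    [0.05, 0.12, 0.04],  # inputs sourced from the US
    [0.01, 0.01, 0.08],  # inputs sourced from the Netherlands
])

# Total input requirements via the Leontief inverse L = (I - A)^-1:
# L[i, j] captures inputs from i embodied in j's output through *all* rounds
# of the supply chain, not just first-tier purchases.
L = np.linalg.inv(np.eye(3) - A)

direct_from_china = A[0]  # first-tier exposure only
total_from_china = L[0]   # includes third-country linkages

for c, d, t in zip(countries, direct_from_china, total_from_china):
    print(f"{c}: direct {d:.3f}, total {t:.3f}")
```

Total exposure exceeds direct exposure for every country, which is the sense in which indirect linkages amplify the shock.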

Gerschel et al (2020) consider how the reduction in production in China may affect production in France. They calculate that a 10% drop in Chinese productivity could reduce French GDP by 0.3% through direct and indirect trade links only, mostly transmitted through a few large firms which heavily rely on foreign inputs. Bonadio et al (2020) perform a quantitative assessment of the role of GVCs in the pandemic. They find that one third of pandemic-related GDP contractions are attributed to the propagation of shocks through global supply networks. Nonetheless, they argue that regionalising supply chains is unlikely to insulate economies from similar events in the future given that the shock affects domestic and foreign nations similarly.

Heise (2020) uses transaction-level data to document a sharp decline in US imports from China in February and March, which was somewhat reversed in April. Daily imports from China were roughly 50% lower in March 2020 compared to March 2019, with bigger falls for smaller importers relative to larger importers. Firms with pre-established supply-chain relationships with other countries proved more resilient to the shock, as they were able to shift sourcing to partially offset the reduction in imports from China.

Javorcik (2020) argues that the current crisis has revealed many countries’ over-reliance on China and therefore will force businesses to re-engineer their GVCs by diversifying supplier bases. Kilic and Marin (2020) argue that the pandemic will trigger a wave of re-shoring and greater use of robots to mitigate supply-chain risks, as companies reassess the benefits of sourcing intermediate goods from overseas.

Protectionism on the rise during the crisis

One of the major instruments that many governments have used to tackle the effects of the pandemic has been protectionist trade policy, especially applied to personal protective equipment (PPE) and vital medical supplies, which, according to a WTO report (2020), constituted 5% of total world merchandise trade in 2019. Their production is fairly concentrated: 35% of global medical product exports are from Germany, the US, and Switzerland, while 40% of world PPE exports are from China, Germany and the US.

Bown (2020) summarises the many export restrictions that some major economies have placed not only on PPE, but also on hospital equipment, pharmaceuticals, and food. He notes that these restrictions imperil many countries’ access to much needed products at a global scale, since taking supplies off the market can spark retaliations, lead to higher prices, and harm hospital workers in need in other countries.

Evenett (2020) notes that the rise in protectionism is a continuation of the trend seen since the global financial crisis, but is also common during major global downturns. He stresses the risk that the crisis-era policies become ‘a new dominant form of protectionism’ and cause pervasive trade distortions. He argues for active removal of trade barriers for key supplies to ensure they can reach where they are most needed.

The case for keeping international trade open

Contrary to the idea that GVCs increase the potential for supply disruptions, Miroudot (2020) argues that reliance on a single source for key inputs is far riskier, and complex supply networks are more resilient. He notes that increased trade openness during the crisis could allow countries to exploit economies of scale and boost total production of vital supplies. For example, Korea has already successfully shifted much of its diagnostics industry towards producing Covid test kits, which are now exported to over 100 countries. This, he argues, was made possible by the country’s active participation in international supply networks and skilled supply-chain managers who built on their experience to act quickly.

Stellinger et al (2020) put forward the argument that international trade has been beneficial to public health by allowing increased specialisation: one country on its own simply cannot manufacture all the medical equipment, provide the chemical inputs for medicines, and invent essential medicines. Furthermore, increased profits from access to global markets would encourage R&D spending and fuel innovation.

Bamber et al (2020) show that the fragmentation of production of medical supplies and devices over recent decades has increased – and not diminished – the ability of countries to respond to the sudden spikes in demand during the Covid crisis. In fact they point out that export controls threaten to reduce rather than increase local availability, especially if a country exports parts and components but imports finished medical supplies. They use the example of a ban on the export of ventilator hoses – used for ventilators but of no use on their own – reducing overseas production of ventilators, and hence reducing the supply of finished machines.

Fiorini et al (2020) note the importance of functioning supply chains and distribution networks for the production of these key supplies: while the largest producer of medical masks is China, they require filters from fabric mill companies from a diverse set of countries such as Taiwan, Germany and Canada. They also argue that export controls and requisitioning domestic suppliers made it harder for even domestic healthcare professionals to obtain vital equipment. Furthermore, they make the point that a country banning the export of one good may prompt reciprocal measures which could reduce the overall supply of that good, or other goods the country does not produce domestically.

Baldwin and Freeman (2020b) show the evolution of countries’ reliance on each other over time, and note the increased role of Chinese inputs in all other nations’ output since the Great Trade Collapse. They conclude that if the world is to ramp up the production of essential medical equipment to meet the swift rise in pandemic-driven demand, there is no alternative to international trade given the integrated nature of global manufacturing. In particular, documenting high levels of interdependence among nations at the detailed sector level, they posit that a spiral of retaliation could disrupt world productive capacity in virtually all manufacturing sectors.

Final thoughts

Unlike past pandemics, Covid-19 has struck at a time when the world is highly interconnected. The global nature of the shock is likely to lead to a large decline in both exports and imports in a more severe fashion than the 2008/09 Great Trade Collapse, and there is already evidence that its effects are being propagated through GVCs. In response to the simultaneous supply and demand shocks, many governments are using trade policy as part of their toolkit to combat the pandemic, in particular by imposing export restrictions on medical supplies. The consensus view of the literature is that the imposition of such barriers can be particularly harmful during crises, and that keeping trade open, and the strengthening rather than dismantling of supply networks, could help limit the economic and human cost of the pandemic and foster recovery.

Rebecca Freeman works in the Bank’s Global Analysis Division and Rana Sajedi works in the Bank’s Research Hub Division.


Covid-19 Briefing: Corporate Balance Sheets

Published by Anonymous (not verified) on Tue, 09/06/2020 - 6:01pm in

Neeltje van Horen

Faced with unprecedented declines in corporate revenue, the Covid-19 shock represents a loss of cash flow of indeterminate duration for many firms. It is too early to tell how exactly firms will be affected by this crisis and how scarring it will be, but the crisis will likely have a significant impact on most corporates. This post reviews the literature on factors affecting firms’ ability to withstand the Covid-19 shock and what large corporates did to shore up their finances.

Corporate stock price reactions

The Covid-19 shock had a big impact on stock markets worldwide. Using a newspaper-based Equity Market Volatility (EMV) tracker, Baker et al (2020) show that no previous infectious disease outbreak, including the Spanish Flu, has impacted the stock market as powerfully as the Covid-19 pandemic. However, there was substantial heterogeneity in stock market reactions, which can give useful early insights into which types of firms are likely to be most at risk until firm balance sheet data become available. Perhaps unsurprisingly, financial flexibility and the ability to shed costs when revenues decline were rewarded by investors.

Ramelli and Wagner (2020) show that when the virus was still contained to China, investors tended to move out of internationally-oriented US firms, especially those with China exposure. As the virus spread to Europe and the US, corporate debt and cash holdings emerged as the most important value drivers. This remained true even after the Fed announced a series of measures to support the economy on 23 March.

Fahlenbrach, Rageth and Stulz (2020) find that US non-financial firms with lower cash holdings, higher short-term debt, and higher long-term debt experienced larger declines in stock returns and benefited more from the news about the Fed’s policy response. Interestingly, this paper also finds that the ex-ante ability of firms to access financial markets (based on popular measures of financial constraints) did not explain stock market returns. In other words, it is not the ability to access financial markets, but the actual need to access them, that appears to matter. These findings are confirmed by Ding, Levine, Lin and Xie (2020) in a multi-country setting. In addition, this paper shows that the drop in stock market prices was milder among firms less dependent on global supply chains, with more corporate social responsibility activities and with less entrenched executives.

Alfaro, Chari, Greenland and Schott (2020) also study stock market response but take a different approach. They use simple epidemiological models of infectious disease to isolate unanticipated changes in the trajectory of the disease. They show that an unanticipated increase (decrease) in predicted infections forecasts negative (positive) next-day stock returns. These findings are consistent with investors using such models to update their beliefs about the economic consequences of the outbreak in real time, or these models being a good approximation of investors’ beliefs. Using the same variation in predicted infections the authors find that the Covid-19 related losses in market value are larger for more debt-laden, less profitable and more capital-intensive firms. The latter result suggests that investors value the relative ease with which labour versus capital costs can be shed.

How do firms shore up liquidity?

Acharya and Steffen (2020) also confirm the importance of liquidity for a firm’s market valuation, but in addition provide some first insights as to how large US corporates attempted to shore up their liquidity. Credit line usage accelerated rather early during the crisis period and became somewhat flat by the end of March. While most firms were raising cash through credit line draw-downs, riskier firms were drawing down much larger amounts.

This is in line with earlier findings from the global financial crisis that firms with enough internal funds chose not to use their credit lines (Campello, Giambona, Graham and Harvey, 2011). This is not surprising, as access to credit lines becomes more restricted following declines in borrower profitability (Sufi, 2009), and banks tend to increase interest rates and make loan provisions less borrower-friendly when firms, faced with a cash flow shock, draw on or increase their credit lines (Brown, Gustafson and Ivanov, 2020).

Besides drawing down credit lines, Acharya and Steffen (2020) show that firms also raised cash by accessing bond markets, but this started later. New bond issuance was muted until mid-March but accelerated after the Fed announced its corporate bond purchase programs, through which it can purchase investment-grade rated corporate bonds (including ‘fallen angels’) and ETFs. The surge in volume was driven almost entirely by AAA-A-rated companies. However, the dollar volume of bonds issued by BBB-rated firms also increased substantially after 23 March, with growth rates similar to those of AAA-A-rated companies. Not surprisingly, yields were substantially higher for all firms issuing new debt.

Cash is king

Cash is clearly the king of liquid assets during Covid-19. Existing credit lines can and do provide firms with additional resources to help them meet short-term liquidity needs. But they often have a short maturity, are more expensive and it is uncertain whether banks will renew them (Campello, Giambona, Graham and Harvey, 2011). Fortunately, corporate liquidity positions of firms in advanced economies appear stronger than they were at the onset of the global financial crisis. This is partly a consequence of a general increase in corporate cash holdings in many economies since the mid-2000s (Dao and Maggi, 2018), but also a reaction to the financial crisis itself as firms tend to shore up their liquidity after a financial or economic shock (Almeida, Campello and Weisbach, 2004; Berg, 2018).

However, a substantial number of firms have inadequate liquidity buffers. Banerjee, Illes, Kharroubi and Garralda (2020) estimate that 25% of the firms in advanced and emerging economies do not hold enough cash to cover their debt obligations. The Bank of England (2020) estimates that before the Covid-19 shock, only one third of UK companies held liquidity buffers that were larger than three months’ worth of their turnover.
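As a toy illustration of the kind of liquidity screen behind the Bank of England metric mentioned above, the snippet below flags firms whose cash holdings cover less than three months’ worth of annual turnover. The firm names and all figures are made up:

```python
# Hypothetical firm-level balance sheet data (cash and annual turnover, £m).
firms = [
    {"name": "Firm A", "cash": 120.0, "turnover": 400.0},
    {"name": "Firm B", "cash": 30.0,  "turnover": 600.0},
    {"name": "Firm C", "cash": 90.0,  "turnover": 200.0},
]

THRESHOLD_MONTHS = 3  # buffer deemed adequate if it covers 3 months of turnover

for f in firms:
    # Months of turnover covered by the cash buffer.
    months_covered = 12 * f["cash"] / f["turnover"]
    f["adequate"] = months_covered >= THRESHOLD_MONTHS
    print(f"{f['name']}: {months_covered:.1f} months, adequate={f['adequate']}")

# Share of firms with an adequate buffer, the aggregate statistic reported
# in surveys of this kind.
share_adequate = sum(f["adequate"] for f in firms) / len(firms)
print(f"Share with adequate buffers: {share_adequate:.0%}")
```

Here Firm B holds under a month of turnover in cash and would be flagged as vulnerable to a prolonged revenue stop.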

While it is still very uncertain how the Covid-19 induced crisis will unfold and to what extent it will resemble the global financial crisis, some lessons from the global financial crisis might be good to keep in mind. Joseph, Kneer, Van Horen and Saleheen (2019) show that during the global financial crisis companies with limited cash holdings had to reduce their investment and therefore lost productive capacity. When demand returned and market conditions improved they were not able to catch up with their cash-rich rivals and as a result lost market share to them. Companies with large amounts of cash on their balance sheets at the onset of the coronavirus crisis might therefore emerge as winners in the post-Covid world.

Cash might be king, but leverage is critical as well. While UK corporates’ debt-servicing capacity has improved in recent years, the total debt they owe has grown steadily over the same period (Bank of England, 2020). Papers studying the global financial crisis provide ample evidence that firms that needed to roll over large amounts of debt during the crisis experienced larger employment losses (Giroud and Mueller, 2017), invested substantially less (Kalemli-Ozcan, Laeven and Moreno, 2020) and experienced a persistent decline in total factor productivity (Duval, Hee Hong and Timmer, 2019). It is therefore not surprising that stock market reactions were more muted for firms with lower levels of debt.

Concluding remarks

Firms that are cash rich, have low leverage, and have a flexible cost base are more likely to be resilient to the Covid-19 shock and might be able to improve their competitive positions when the recovery sets in. However, it is crucial that temporary liquidity problems of otherwise viable firms do not turn into solvency problems as this will lead to longer-term economic damage. This highlights the importance of government schemes and continued lending by banks to ease funding shortages of firms.

Neeltje van Horen works in the Bank’s Research Hub Division.


How important are large firms for aggregate productivity growth in the UK?

Published by Anonymous (not verified) on Thu, 28/05/2020 - 6:00pm in

Marko Melolinna

Aggregate labour productivity growth has been low in the UK following the global financial crisis in 2008 (Chart 1). The average annual growth rate has been only 0.7% over the period 2008 to 2019, which is around a third of the growth rate seen during the decade preceding the crisis. There are many ways of analysing the reasons for this weakness, but in this blog post, I concentrate on examining the role that the largest firms in the UK have played in the story. Our analysis covering the past three decades from 1990 to 2017 suggests that firm-specific, or idiosyncratic, shocks to the 100 largest firms had a significant effect on aggregate productivity dynamics in the UK.

Chart 1: UK productivity and pre-crisis trend

Notes: Productivity is defined as UK GDP divided by number of employees in the UK economy. 1997Q1=100.

Source: ONS and author’s calculations.

Various explanations of the weak productivity dynamics in the UK in the aftermath of the financial crisis have been proposed in the literature. A wide range of approaches have been used, increasingly relying not just on aggregate, but also on firm- and industry-level data. A crucial question in the analysis has been the extent to which the productivity slowdown has been of a short-term cyclical or a more persistent structural nature. Evidence suggests that during the initial phases of the recession, firms in the UK acted flexibly by holding on to labour and lowering factor utilisation in response to weak demand conditions, with negative cyclical effects on productivity. But since then, more structural factors, like reduced investment in both physical and intangible capital and impaired reallocation of resources from low- to high-productivity uses, are likely to have played a more persistent role in the weakness (see eg Barnett et al (2014) and Bank of England Quarterly Bulletin (2014Q2)). Purely empirical characteristics of the productivity data also suggest an important role for structural factors (see Melolinna and Tóth (2019)). More recently, uncertainties related to the EU referendum have had a negative effect on productivity growth (see Bloom et al (2019)).

In terms of industry-level data, Tenreyro (2018) offered evidence pointing to the importance of very few sectors in driving the weakness in productivity growth, with finance and manufacturing being the largest contributors to the slowdown. In a recent example of analysis with the universe of UK firm-level data, Schneider (2018) argued that the slowdown has been driven by shortfalls in the top quartile of the firms’ productivity distribution.

In a recent Working Paper, my co-author and I looked at UK productivity dynamics from a slightly different angle compared to previous studies. Our interest was not on the drivers of the weakness, nor on the industry mix or universe of all firms. Instead, our focus was on establishing whether idiosyncratic shocks to the largest UK firms, rather than common shocks, have had a significant effect on the dynamics of aggregate productivity in the UK over the past three decades. To do this, we introduced a unique way of separating firm-specific idiosyncratic shocks from common shocks.

Traditionally, macroeconomists are not very interested in the economic activity of a small number of firms (or households), since their effect on the aggregate should be negligible. However, an important paper by Gabaix (2011) showed that under certain conditions, a small number of firms can have significant aggregate effects. In particular, he showed that if the distribution of firm size is sufficiently skewed and certain other statistical properties of the firm population are met, then a small number of large firms may account for a disproportionately large fraction of aggregate fluctuations. To measure this, the author introduced the concept of a ‘granular residual’ (GR), which is a composite of idiosyncratic, firm-level shocks to a selected number of the largest firms in the economy. Gabaix (2011) finds that the GR is a significant driver of aggregate GDP and productivity dynamics in the US.

In our analysis, we first showed that the population of firms in the UK is indeed sufficiently skewed for large firms to potentially have significant aggregate effects. We then took Gabaix (2011) (and several papers that have built on it) as a starting point for calculating a firm-level GR. As Gabaix (2011) shows, in its most general form, the GR can be computed with a simple formula that adds up the idiosyncratic firm-level technology shocks for the 100 largest UK firms each year. These shocks are weighted by the size of each firm relative to GDP, so the larger the firm, the larger the contribution of its technology shock to the GR measure.
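The granular residual described above can be sketched in a few lines: weight each of the 100 largest firms’ idiosyncratic technology shocks by the firm’s sales relative to GDP, then sum. The data below are simulated with a fat-tailed size distribution, not the paper’s actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firm-level data for one year: sales of the 100 largest firms
# (drawn from a fat-tailed Pareto distribution, as the granularity argument
# requires) and their estimated idiosyncratic technology shocks.
n_firms = 100
gdp = 2_000_000.0
sales = np.sort(rng.pareto(1.1, n_firms) + 1.0)[::-1] * 1_000.0
shocks = rng.normal(0.0, 0.02, n_firms)  # idiosyncratic (demeaned) shocks

# Granular residual: size-weighted sum of idiosyncratic shocks, each firm
# weighted by its sales relative to GDP (Gabaix-style weighting).
weights = sales / gdp
gr = float(np.sum(weights * shocks))
print(f"Granular residual contribution: {gr:.5f}")
```

With skewed sizes, a handful of large firms dominate the weights, which is why their shocks do not wash out in the aggregate.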

We also make an effort to estimate the idiosyncratic technology shocks somewhat differently from previous literature. In our approach, we control for the unobserved, firm-specific differences in technology that do not vary over time, and — unlike Gabaix (2011) — we also allow for the possible existence of common shocks (which can be correlated over time) affecting many firms. We do this by estimating a firm-level production function, where firm production depends on its fixed (capital) and variable (labour) inputs, as well as the efficiency with which these inputs are combined to produce output. We use an established procedure introduced by De Loecker and Warzynski (2012) to estimate firm-level weights on the fixed and variable inputs, which allows us to back out the firm-specific technology shocks. We then go on to use these firm-specific shocks in simple regressions explaining aggregate UK productivity dynamics over time.

We took our firm-level production function model to financial accounts data on the 100 largest UK-based firms each year (excluding finance and oil sectors). The resulting GR contribution to productivity growth is depicted in Chart 2, together with aggregate UK productivity growth. The GR contribution in the chart shows how much the sum of the idiosyncratic firm-level technology shocks of the 100 largest UK firms contributes to aggregate productivity growth each year, as described above. So for example in 2005, the GR contributed around 0.4pp to aggregate productivity growth of around 2%.

By properly accounting for firm-level technology shocks in this way and regressing aggregate productivity growth on the GR contribution, we find that the shocks to the largest 100 UK firms can explain around 30% of aggregate productivity dynamics in the UK, on average, over the past three decades. Our analysis also suggests that it is important to allow for common shocks as well as firm-specific idiosyncratic shocks in the model. In the paper, we show that simplifications of our approach, which omit controlling for firm-level idiosyncratic shocks or do not account for common shocks, do not work well, highlighting the importance of identifying firm-specific shocks correctly.

Chart 2: UK productivity growth and ‘Granular residual’ of large firms

Notes: Productivity is defined as UK GDP divided by number of employees in the UK economy. Granular residual refers to the contribution of the largest 100 firms to aggregate productivity growth, as defined in the text.

Source: ONS and author’s calculations.

The main takeaway, based on our empirical application to UK productivity data, is that technology shocks to large firms can be important drivers of aggregate productivity dynamics. So while commentators have not generally looked at idiosyncratic shocks when explaining the UK productivity puzzle, the evidence in this post suggests that they should. However, as Chart 2 also shows, the correlation between the GR and aggregate productivity growth is far from perfect, especially since the financial crisis. So there are plenty of other factors at play outside the largest firms that account for the majority of productivity dynamics. The ongoing challenge for policymakers is to figure out what factors, common and idiosyncratic, drive these dynamics, at both the aggregate and the firm level.

Marko Melolinna works in the Structural Economics Division.


Monetary Policy Transmission: Borrowing Constraints Matter!

Published by Anonymous (not verified) on Tue, 26/05/2020 - 6:00pm in

Fergus Cumming and Paul Hubert

How does the transmission of monetary policy depend on the distribution of debt in the economy? In this blog post we argue that interest rate changes are most powerful when a large share of households are financially constrained. That is, when a higher proportion of all borrowers are close to their borrowing limits. Our findings also suggest that the overall impact of monetary policy partly depends on the behaviour of house prices, and might not be symmetric for interest rate rises and falls.

From Micro to Macro

In a recent paper we use the universe of UK loan-level mortgage data to construct an accurate measure of the proportion of households that are close to the limits of what banks will lend them. Our rich mortgage data allow us a rare glimpse into the various factors that drove individual debt decisions between 2005 and 2017. After stripping out the effects of regulation and other developments in the macroeconomy, we then estimate the proportion of highly indebted households to construct a measure that is comparable across time (Figure 1). In doing so, we collapse the information contained within 11 million mortgages into a single time series, which allows us to explore macro questions in a tractable way.

Figure 1 – The share of conditional LTI > 4 and LTI > 5 mortgages

We use this state variable as a proxy for the share of people who are financially constrained, and use its variation over time to explore how it affects the economy’s responsiveness to monetary policy. In particular, we focus on the response of aggregate consumption, interacting it with a set of standard monetary shocks using local projection methods. This involves running regressions of consumption on monetary shocks at different horizons. We interact the shocks with a variable that captures the proportion of highly indebted households, along with other controls, to see if the strength of the response varies with that proportion.

These regressions yield a set of impulse responses that tell us how the path of consumption might respond to an unexpected increase in the monetary policy rate. We can then take two different values of our state variable (1 standard deviation above and below the mean) to compare the path of consumption following a change in monetary policy when the share of highly indebted households is above, or below, its historic average. Intuitively, we know that a contractionary monetary shock leads to a near-term dip in consumption, which is why monetary policymakers tend to increase interest rates when they want to apply the brakes. We explore whether this pattern changes according to what else is going on, including whether more people than usual are financially stretched.
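A minimal sketch of a state-dependent local projection of this kind, run on simulated data (all series and coefficient values below are hypothetical, not the paper’s), might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Simulated stand-ins for the series described above (all hypothetical):
shock = rng.normal(size=T)   # identified monetary policy shocks
state = rng.normal(size=T)   # standardised share of constrained households
# Consumption growth responds more to the shock when `state` is high.
dc = -0.3 * shock - 0.2 * shock * state + rng.normal(scale=0.5, size=T)

def local_projection(h):
    """OLS of consumption growth at horizon h on the shock, the state
    variable, and their interaction (the state-dependence term)."""
    y = dc[h:]
    X = np.column_stack([
        np.ones(T - h),
        shock[:T - h],
        state[:T - h],
        shock[:T - h] * state[:T - h],
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

for h in range(4):
    b = local_projection(h)
    # Response one sd above vs one sd below the mean of the state variable.
    print(f"h={h}: high-state IRF {b[1] + b[3]:+.2f}, "
          f"low-state IRF {b[1] - b[3]:+.2f}")
```

The gap between the high-state and low-state responses at each horizon is the analogue of the gap between the grey and blue swathes in Figure 2.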

State-Contingent Monetary Policy

We find that monetary policy is more powerful when a large proportion of households have taken on relatively high debt burdens. In that sense, the transmission of monetary policy depends on the state of the economy. In Figure 2 below, we look at the response of non-durable, durable and total consumption in response to a 100bp increase in interest rates. The grey (blue) swathes plot the path of consumption when there is a large (small) share of people who might be more constrained by higher debt holdings. The gaps between the blue and grey swathes suggest monetary policy is more potent when more people have higher debt levels.

Figure 2 – The response of consumption to 100bp contractionary monetary policy shock

This differential effect likely works through at least two mechanisms: first, when households increase their borrowing relative to their income, the mechanical effect of monetary policy on disposable income is amplified. Those with large borrowings suffer most from a proportional increase in monthly repayments! Second, households close to their borrowing limits adjust spending more in response to income changes (they have a higher so-called marginal propensity to consume). If there are more of these people, overall consumption will respond more to monetary policy. Interestingly, we find that our results are driven by the distribution of highly indebted households rather than by a general rise in borrowing.
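The first, mechanical channel can be seen with a stylized calculation, assuming a variable-rate, interest-only debt contract so that annual interest cost is simply debt times the rate (the income figure and debt multiples below are purely illustrative, not drawn from the paper):

```python
# Stylized example: variable-rate, interest-only debt, so annual
# interest cost = debt * rate. All figures are hypothetical.
income = 30_000
rate_before, rate_after = 0.02, 0.03    # a 100bp rate rise

squeeze = {}                            # income squeeze by debt-to-income ratio
for dti in (1, 4):
    debt = dti * income
    extra_cost = debt * (rate_after - rate_before)
    squeeze[dti] = extra_cost / income
    print(f"debt/income = {dti}: disposable income falls by {squeeze[dti]:.1%}")
```

The same 100bp rise takes four times the share of income from the household with four times the debt, so a shift in the distribution of debt-to-income ratios changes the aggregate bite of a given rate change.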

We also find evidence of some asymmetry in the transmission of monetary policy. When the share of constrained households is large, interest rate rises have a larger absolute impact than interest rate cuts. To some extent this is unsurprising. When your income is very close to your outgoings, the effect of a small income squeeze is very different to receiving a small windfall. So it follows that interest rate increases are likely to affect behaviour more than interest rate falls.

Our results also suggest that the behaviour of house prices affects how monetary policy feeds through. When house prices rise, homeowners feel wealthier and are more able to refinance their mortgages and release housing equity in order to spend money on other things. This can offset some of the dampening effects of an increase in interest rates. In contrast, when house prices fall, this channel means an increase in interest rates has a bigger contractionary effect on the economy, making monetary policy more potent.

Finally, we find that the household debt distribution affects not just consumption responses but also how monetary policy feeds through to the economy more broadly. Running the same exercise to gauge the effects on wages, employment, real output growth and industrial production, we find a similar result: responses are stronger when more households are highly indebted. This is likely driven by the direct consumption response, amplified as changes in spending feed through to firm and labour market behaviour – or what economists call general-equilibrium effects.

Policy Implications

Our results suggest that the potency of monetary policy is not always the same: it varies with economic conditions. In particular, we show that the distribution of household indebtedness might explain some of the variation in the effects of monetary policy. It's not so much the average level of indebtedness that matters, but rather its distribution. The greater the proportion of highly indebted households, the stronger the effect of monetary policy.

Economists, central bankers and other policymakers often look at the role of debt from a financial stability or regulatory perspective — how it affects the stability of individual institutions and the financial system as a whole. Our results suggest another reason why debt matters — it affects how monetary policy changes are transmitted through the economy. This is an emerging area of research, facilitated by new detailed micro-level datasets, and there’s much more for the profession to do on this topic.

Fergus Cumming works in the Monetary Policy Outlook Division and Paul Hubert works at Sciences Po – OFCE.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

An Austrian Tragedy

Published by Anonymous (not verified) on Mon, 25/05/2020 - 11:45am in

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist – famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: [Menger’s] two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, [were brothers-in-law]. [Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927, and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, who were exact contemporaries, from Hayek (born in 1899) who was a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up either in the US or Britain after World War II, and Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts at the NYU economics department, where he succeeded in training two noteworthy disciples who wrote dissertations under his tutelage, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU, but Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. However, Hayek’s interests shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist benefactors, who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided other forms of research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions that were most uncongenial to the Austrians: Walrasian general-equilibrium theory and Keynesian macroeconomics, the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to become the leader of a fantastical anarchistic revolutionary movement that would replace all established governments with a reign of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling 10 thick volumes of his collected works published by the Liberty Fund. He also established a robust Austrian program at NYU, training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted by Cato, and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point for extreme libertarians and white nationalists to congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies either to degenerate toward extinction, to somehow blend in with the members of the larger species, thereby losing its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct sub-species. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and by emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They criticize the mainstream preoccupation with equilibrium, attributing it to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, assumptions which effectively make equilibrium the inevitable — and trivial — conclusion entailed by them. In their view, this focus on equilibrium states under unrealistic assumptions results from a preoccupation with mathematical formalism in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is the common knowledge of the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, semi-omniscient, agents.

Although shreds of subjectivism — i.e., agents make choices based on their own preference orderings — are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider to be their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate – mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time – much less communicate to a central planner.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation which triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to keep the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and by his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of that program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student Ludwig Lachmann, who, having decamped to faraway South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32, it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not proved unfortunate, and not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It’s precisely the Austrian attentiveness to the complexity of economic activity – the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants – that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector, and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate to assert that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustments by market participants are a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, their belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory (at least in terms of a preferred Walrasian general-equilibrium model) rationalization for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

Business cycles and the unemployment rate

Published by Anonymous (not verified) on Tue, 12/05/2020 - 6:54am in

The COVID-19 pandemic has unleashed an economic crisis that is unprecedented in its speed and in its depth, making these very interesting times to study macroeconomics.

Lecture 9 of Economics for Everyone describes the anatomy of the business cycle, and relates these swings in macroeconomic activity to a statistic that, as much as any other, speaks directly to the lives of citizens, the unemployment rate.

So in this lecture we describe the anatomy of the business cycle, how macro-economists link deviations of GDP from its potential to changes in the unemployment rate, and, finally, what exactly the statistic called the “unemployment rate” is and how it is measured by statistical agencies.

Download the presentation as a pdf.
