
Covid-19 briefing: extensions to the SIR model

Published by Anonymous (not verified) on Mon, 30/11/2020 - 8:00pm in

John Lewis

The SIR model, first developed by Kermack and McKendrick (1927), remains the canonical epidemiological model. It is a natural choice for economists seeking to model the interplay between economic and epidemiological dynamics. This briefing surveys some of the many adaptations to the basic SIR setup which have emerged in the epi-macro literature over the past six months. These extensions have been used to analyse issues such as lockdown policies, super-spreaders, herd immunity, hospital capacity and ‘test-and-trace’.

The canonical SIR model

The SIR model divides the population into three categories (or ‘compartments’ in epidemiology jargon): Susceptible, Infected and Recovered. It models individuals’ transitions between those states using a set of exogenously given transition rates related to the relative sizes of each group. And it assumes that transitions can only ever occur in ‘one direction’ (eg susceptible people may become infected, and infected people may recover, but no other moves between groups are possible).
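These mechanics can be sketched in a few lines of code. The sketch below is a minimal daily-step discretisation of the SIR dynamics; the parameter values (a transmission rate of 0.3 and a recovery rate of 0.1, implying a basic reproduction number of 3) are illustrative, not calibrated to Covid.

```python
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=200):
    """Discrete-time (daily) approximation of the Kermack-McKendrick SIR model.

    beta  : transmission rate (contact rate x infection probability per contact)
    gamma : recovery rate (1 / average infectious period, in days)
    Transitions are one-directional: S -> I -> R.
    Returns the path of (susceptible, infected, recovered) population shares.
    """
    s, i, r = s0, i0, 0.0
    path = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i   # S -> I, proportional to both S and I
        new_recoveries = gamma * i      # I -> R, at the exogenous rate gamma
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        path.append((s, i, r))
    return path

path = simulate_sir()
# Shares always sum to one, the susceptible share only falls, and with
# these parameters the epidemic has burned out well before day 200.
```

Epi-macro papers typically replace the fixed `beta` with a function of agents' consumption and labour supply choices, which is where the economics enters.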

SIR-based epi-macro models modify this basic framework by assuming that transitions between compartments are endogenous and related to individuals’ economic decisions (eg consumption or labour supply). They then bolt on a simple macro model to allow for feedback from pandemic dynamics to the macroeconomy and vice versa.

Many epi-macro papers use this basic SIR model, either because they want to generate a simple benchmark (eg Eichenbaum, Rebelo and Trabandt (2020)), and/or because the main modelling focus of the paper lies elsewhere. Over the past six months, other authors have added more epidemiological richness to epi-macro models by extending the SIR framework in various ways.

Uncertainty over infection status

Forsyth (2020) introduces a plausible informational friction: some infected people are asymptomatic, and some susceptible people display Covid-like symptoms though they are not infected. This setup allows for a comparison of policies targeted at those with symptoms against the alternative of a uniform lockdown. The former are less costly in terms of output, because fewer agents have to isolate, but result in some transmission via asymptomatic agents. Calibration on UK data in the paper suggests mitigation measures reduce fatalities by 39.1% over 18 months. Under uniform lockdown, the GDP hit in 2020 is estimated at 21.4%, but if policies are conditional on symptoms, GDP costs can be reduced without increasing fatalities.

Variation in ‘R’

Other papers have relaxed the assumption that the probability of a susceptible individual becoming infected is uniform across the whole population. This can be done by introducing variation in either the contact rates between people, or in the probability of transmission when two individuals meet.

Ellison (2020) uses epidemiological modelling advances of the 1980s and 1990s to relax the ‘uniform contact rates’ assumption in two ways. First, agents are split into sub-groups with different activity levels, implying different probabilities of encountering others. Second, agents are more likely to encounter those within their own group than outsiders. He shows that key population parameters, like herd immunity thresholds and composite R, are not merely averages of the underlying groups, but depend on the range and distribution of activity levels across the population. He shows that R can decline faster when meeting is more heterogeneous, and so models based on uniform meeting assumptions may overstate the initial impact of lockdown measures and understate how fast the virus would have spread absent any lockdown measures.
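The dependence of composite R on the distribution, not just the mean, of activity can be illustrated with the textbook proportionate-mixing formula, under which composite R scales with E[a²]/E[a] rather than E[a]. This is a standard result from the heterogeneous-mixing literature Ellison draws on, not necessarily his exact specification, and the `base` parameter is an illustrative scaling constant:

```python
def composite_r(activity, base=1.0):
    """Composite reproduction number under proportionate mixing.

    High-activity agents are over-represented both as infectors and as
    infectees, so composite R scales with E[a^2]/E[a] rather than the mean
    activity level E[a]. `base` bundles the remaining epidemiological
    constants (illustrative, not calibrated).
    """
    n = len(activity)
    mean_a = sum(activity) / n
    mean_sq = sum(a * a for a in activity) / n
    return base * mean_sq / mean_a

# Two populations with the same mean activity level (1.0):
uniform = composite_r([1.0, 1.0, 1.0, 1.0])        # 1.0
heterogeneous = composite_r([0.5, 0.5, 1.5, 1.5])  # 1.25: spread raises R
```

So a more dispersed activity distribution raises composite R for a given average; conversely, because high-activity agents tend to be infected first, effective heterogeneity falls over time and R can decline faster than a uniform-mixing model predicts.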

Holden and Thornton (2020) allow for variation in the probability of transmission across individual pairs, by making the pair-specific reproduction rate a random variable. This is a way of modelling ‘super-spreaders’ who are much more infectious than average. If by ‘bad luck’ more of the initially infected are super-spreaders, this raises the initial case count, and has long-lasting effects as cases compound from that higher base. The role of ‘luck’ is much greater early on when fewer people are infected, whereas later on the ‘law of averages’ tends to kick in and R converges to its population average. As a result, otherwise identical populations can exhibit very different paths simply due to chance. In this model, optimal policy depends not just on the average R, but crucially on the proportion of its distribution that lies above one. The authors show that the relative efficacy of measures to reduce the transmissibility of a virus (eg facemasks) vs. lowering contact rates (eg shelter-in-place orders) is affected by the distribution of R.
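The role of early-pandemic ‘luck’ can be illustrated with a simple branching-process simulation. The sketch below is not Holden and Thornton’s model: it draws each agent’s individual reproduction number from a gamma distribution (rounded to an integer offspring count, a crude stdlib-only stand-in for a negative-binomial draw), and the parameter values are illustrative.

```python
import random

def epidemic_path(mean_r=1.2, dispersion=0.2, initial=10,
                  generations=10, seed=0):
    """Branching-process sketch with heterogeneous infectiousness.

    Each infected agent draws an individual reproduction number from a gamma
    distribution with mean `mean_r`; a low `dispersion` puts most mass near
    zero with a fat right tail (super-spreaders). Returns the cumulative case
    count after `generations` generations of infection.
    """
    rng = random.Random(seed)
    cases, total = initial, initial
    for _ in range(generations):
        # Offspring per case: gamma draw rounded to the nearest integer
        # (a crude stand-in for a negative-binomial offspring count).
        cases = sum(
            int(rng.gammavariate(dispersion, mean_r / dispersion) + 0.5)
            for _ in range(cases)
        )
        total += cases
    return total

# Identical parameters, different draws: outcomes diverge purely by chance.
totals = [epidemic_path(seed=s) for s in range(20)]
```

With mean R just above one, some seeds fizzle out while others compound into large outbreaks, mirroring the point that otherwise identical populations can follow very different paths simply due to chance.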

Waning immunity

In the canonical SIR model, immunity is permanent once acquired, because recovered agents are no longer susceptible. This assumption can be relaxed in the so-called ‘SIRS’ model, where recovered individuals can lose immunity and move back into the susceptible category. This effectively assumes that immunity can wane to some extent, consistent with documented cases of Covid reinfection (eg Tillett et al (2020)).
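The SIRS extension amounts to a single extra flow, from R back to S. A minimal sketch, again with illustrative and uncalibrated rates, where `xi` is the rate at which immunity wanes:

```python
def simulate_sirs(beta=0.3, gamma=0.1, xi=0.01, s0=0.99, i0=0.01, days=400):
    """Daily-step SIRS dynamics: SIR plus waning immunity.

    xi is the rate at which recovered agents lose immunity (1 / average
    duration of immunity, in days). Setting xi = 0 recovers the standard
    SIR model. All parameter values are illustrative.
    """
    s, i, r = s0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i   # S -> I
        new_rec = gamma * i      # I -> R
        waned = xi * r           # R -> S: the only new transition
        s += waned - new_inf
        i += new_inf - new_rec
        r += new_rec - waned
    return s, i, r

# With waning immunity the infection settles at an endemic level instead of
# dying out, so herd immunity is never durably achieved.
with_waning = simulate_sirs(xi=0.01)
no_waning = simulate_sirs(xi=0.0)
```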

The SIRS model of Çenesiz and Guimarães (2020) implies that the shorter-lived immunity is, the harder it is to achieve herd immunity, and the greater and longer the period of social distancing required. Waning immunity has relatively little role early on in pandemics, because policy prescriptions are similar regardless of assumptions about longevity of immunity. But over time the difference between the two models’ results grows larger, because if immunity is permanent, the stock of immune agents steadily cumulates. At longer horizons, therefore, even relatively small changes in lockdown tightness can have large implications.

They then allow for richer determinants of immunity, including the possibility that previously infected agents are less likely to suffer severe forms of Covid, and consider optimal policy when a future vaccine is expected. In short, the faster immunity wanes, and the more distant a vaccine is, the greater the impact of waning immunity on optimal policy, because the pandemic becomes longer-lived.

Additional compartments

Many papers have extended the number of compartments the population is divided into, (well) beyond the three in the classic SIR model. And, by allowing for a richer set of transition equations, they have removed the requirement that individuals move through the groups in a set sequence. Favero (2020) develops a model with a compartment for hospitalised patients. This allows the model to make direct predictions about the numbers hospitalised, and allows for explicit consideration of the role of ICU capacity. When admissions exceed capacity, the additional patients are termed ‘constrained’, and face a higher probability of death because they cannot be treated to the same standard as others.

Favero (2020) shows the addition of this constraint can explain the much higher case fatality rate (CFR) in the Lombardy region of Italy. More broadly, he analyses the role of the capacity constraint, and the risk of ICU saturation, and shows that strategies that involve large numbers of simultaneous infections are associated with much higher death rates.

Giordano et al (2020) develop a model which adds five extra compartments. Some capture different possibilities for those who have the disease, in terms of having symptoms or not, and being detected or not. Other compartments are for agents who are acutely ill, and for two final states where agents either end up dead or fully healed.

This richer model allows for more parameters, including different fatality rates and differential transmission of symptomatic cases. It also helps explain misperceptions of the CFR and of the speed of spread. The authors conclude that lockdown measures can only be lifted when widespread testing and contact tracing are available, and that a combination of both tools is needed to reduce cases.

But more complexity may not always be preferable. Roda et al (2020) compare the SIR model with an SEIR model, which has an extra category capturing exposed agents, who have the disease but are not yet infectious. This creates two additional parameters, governing the latent period (how long before those who get the disease become infectious) and the initial share of the population in the exposed category. In practice, it is difficult to identify these parameters empirically, because the model cannot distinguish a long latent period combined with a low initial share from a short latent period combined with a high initial share. Using Akaike information criteria, the authors show that the small increase in model fit for the SEIR model versus the SIR model does not compensate for the additional complexity introduced by these two extra parameters.
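The AIC logic can be made concrete: each extra parameter adds a penalty of 2, so the better fit must buy at least that much improvement in log-likelihood. The numbers below are made up for illustration and are not taken from Roda et al.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln(L). Lower is better, so an
    extra parameter pays off only if it raises the log-likelihood by more
    than 1 (i.e. improves fit by more than its penalty of 2)."""
    return 2 * n_params - 2 * log_likelihood

# Illustrative (made-up) log-likelihoods: the SEIR model fits slightly
# better but carries two extra parameters (latent period and initial
# exposed share), and the penalty outweighs the gain in fit.
aic_sir = aic(log_likelihood=-120.0, n_params=3)   # 246.0
aic_seir = aic(log_likelihood=-119.5, n_params=5)  # 249.0
# aic_sir < aic_seir, so the simpler SIR model is preferred.
```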

Concluding remarks

The original SIR model has stood the test of time remarkably well. In its simplest form it is able to capture key features of pandemics, and this makes it a natural choice in epi-macro for modellers seeking to bolt on an economic model while keeping their model simple. In addition, the compartmental structure allows for relatively easy incorporation of further complexity. These extensions yield important insights about the relative merits of different lockdown policies, the role of super-spreaders in determining the path of a pandemic, the additional difficulty of achieving herd immunity, the constraint posed by hospital capacity and the role of test and trace.

John Lewis works in the Bank’s Research Hub.

If you want to get in touch, please email us at or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

When bigger isn’t better: UK firms’ equity price performance during the Covid-19 pandemic

Published by Anonymous (not verified) on Thu, 26/11/2020 - 8:00pm in

Tommaso Aquilante, David Bholat, Andreas Joseph, Riccardo M Masolo, Tim Munday and David van Dijcke


Covid-19 (Covid) has had heterogeneous effects on different groups of people. For example, it’s had larger negative impacts on contact-intense occupations (Leibovici, Santacreu and Famiglietti (2020)), low wage earners (Joyce and Xu (2020)) and low-income households (Surico, Känzig and Hacioglu (2020) and Chetty et al (2020)). In this blog, we show that UK listed firms have been heterogeneously impacted too (compare Hassan et al (2020); Griffith, Levell and Stroud (2020)). Surprisingly, small firms’ stock prices have been more resilient on average. Or, to put it differently, being bigger hasn’t been better for firms during the pandemic. However, being big with a modern tilt towards intangibles turned out to be beneficial too.

A striking observation

In the UK, the average stock price of listed firms experienced a sharp fall in response to Covid followed by a slow but steady recovery between March and July 2020 (Figure 1 left-hand panel). However, looking at average stock prices masks important heterogeneity; larger firms saw weaker recoveries than small ones (Figure 1 right-hand panel). Here we measure firm size primarily using a firm’s product market share, i.e. the sales of a given firm as a proportion of total sales in its industry. But we observe the same patterns when using stock market capitalisation or the number of employees as proxies of firm size. However defined, larger firms saw weaker recoveries than their smaller counterparts.

We also find this inverse relationship between the strength of the post-Covid equity price recovery and firm size across the economy. Figure 2 shows a stronger recovery for smaller firms in all sectors except agriculture, fishing and forestry.

Figure 1: UK listed firm stock prices during 2020

This is surprising. Firms with larger market share might be imagined to have had revenue streams and profit margins more resilient to the Covid crisis than smaller competitors. They might also have greater market power vis-à-vis suppliers to renegotiate input costs. And larger firms might find it easier to access credit from financial institutions (Cenni et al (2015)). Historically, firms with larger market share have also outperformed those with smaller market share on a range of book and market-based measures (Edeling and Himme (2018)).

Is small beautiful, and is there something else too?

To shed light on what might be driving the pattern observed in Figures 1 and 2, we estimated a series of models to better understand the relationship between firm size and equity prices. In particular, we regressed the firm-specific equity price recovery, defined as the increase from the equity price trough after the initial ‘Covid shock’ in March 2020 to the subsequent peak up to end-June 2020, on a series of firm characteristics.

Figure 2: Sectoral product market share quartiles and stock market performance

We included measures of balance sheet leverage and liquidity; the historical comovement of a firm’s stock price with the market; foreign sales as a proportion of total sales (to control for the possible impact of foreign exchange price movements); and the effect of intangibles like software, patents, brands and human capital, as a share of total assets (intangible intensity). We also account for sectoral effects.

The results are summarised in Figure 3, which shows the coefficients of our regression model for explaining a firm’s equity price recovery after the Covid shock. Effects which can be robustly differentiated from zero, i.e. where the whiskers on the bars do not overlap with the zero line, are highlighted in red. Strikingly, the strength of the price recovery was still weaker among large firms, even after controlling for these variables, as can be seen from the leftmost bar, which represents the effect of firm size. More specifically, we find that a one standard deviation increase in log product market share is associated with a 0.1 standard deviation decline in the log equity price recovery. This means, for example, that the recovery in the equity price of a firm at the 95th percentile (by product market share) was, on average, 9% weaker than that of the median firm.
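The ‘standard deviation on standard deviation’ reading of the coefficient is just the standardised (beta-coefficient) regression slope. A minimal sketch on synthetic data follows; the -0.1 effect size is built in to mimic the sign and rough magnitude of the finding, and nothing here uses the authors’ data.

```python
import math
import random

def standardize(xs):
    """Rescale a series to mean 0 and standard deviation 1."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

def beta_coefficient(x, y):
    """Slope of y on x after both are standardised: the change in y, in
    standard deviations, per one-standard-deviation change in x."""
    xs, ys = standardize(x), standardize(y)
    return sum(a * b for a, b in zip(xs, ys)) / len(xs)

# Synthetic data with a -0.1 standardised effect built in: larger (log)
# market share, weaker (log) equity price recovery.
rng = random.Random(42)
size = [rng.gauss(0, 1) for _ in range(5000)]
recovery = [-0.1 * s + rng.gauss(0, 1) for s in size]
b = beta_coefficient(size, recovery)  # negative, roughly -0.1
```

Standardising both sides makes coefficients comparable across regressors measured in different units, which is what allows the size effect to be read off directly against the other bars in Figure 3.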

We also find that firms had stronger recoveries if they held more cash relative to total assets; earned a greater proportion of their revenue abroad; and historically had stock prices that moved in line with the wider market, as measured by their equity beta. The first of these findings supports recent research by Joseph et al (2019), who find that firms that entered the crisis of 2007/08 with more cash performed better after the crisis than their cash-poorer peers.

Moreover, even though previous studies have shown that smaller UK listed firms are more intangible-intensive than larger ones (Haskel (2020)), larger shares of intangible assets seem to play no role in explaining the behaviour of equity prices during the pandemic (second bar from the right in Figure 3). However, intangible-intensive firms with larger market shares fared much better than smaller ones, as can be seen in the rightmost bar, which shows the combined effect of firm size and intensity in intangible assets (labelled II below).

We can think of this as the ‘Netflix effect’. The demand for entertainment is unlikely to have fallen during the pandemic and may indeed have increased. However, physical entertainment venues such as cinemas, theatres and museums have had to close. By contrast, large streaming platforms like Netflix can expand their operations, serving more customers at minimal marginal cost thanks to the scalability of intangible assets; it costs Netflix little to provide a movie to one customer or to thousands. This may also explain why Netflix’s share price grew by more than 50% during our study period (though Netflix is not in our sample of firms). More generally, firms with higher shares of intangibles may be more agile than those with capital fixed in physical space.

Figure 3: Effect on equity price recovery (trough to peak) of firm characteristics


Our analysis comes with several caveats. For example, equity prices are not always a true reflection of the state of a firm. Market microstructure, liquidity issues and other phenomena can push the equity price of firms away from fundamentals. Even so, our analysis helps us understand how the stock prices of UK firms responded differentially during the pandemic. The cross-sectional nature of the analysis and data limitations leave many questions unanswered and several mechanisms unexplored.

Possible avenues for further investigation include a finer characterisation of firms’ exposure to foreign markets as well as the role of intangibles in shaping corporate agility. In particular, smaller firms are on average more domestically focused than larger ones. Imports and the trading network in which firms are embedded crucially shape how they are affected by global disruptions such as Covid.

Finally, large firms are increasingly organised in highly complex (business) groups (Altomonte, Ottaviano and Rungi (2018)). The operational efficiency of these groups crucially hinges on the degree of decentralisation of managerial decisions, with more decentralised firms being more recession-proof (Aghion et al (2017)). To the extent that investors consider firms’ managerial practices an intangible asset (see Damodaran (2012)), their degree of management decentralisation could help explain the differential effects in equity prices between small and large firms.

Although our rather coarse-grained analysis focuses only on the tiny, but influential, group of listed firms in an advanced economy, our results may be indicative of deeper changes in the economy related to firm size and business models. For instance, they point to the resilience of smaller, potentially more specialised, firms to large shocks, while larger firms concentrating on scalable ‘intangible activities’ fared well too. Understanding these differentiated reactions, and generalising them beyond the rather particular impact of a pandemic, provides a rich set of questions for future research.

Tommaso Aquilante works in the Bank’s Structural Economics Division, David Bholat and Andreas Joseph work in the Bank’s Advanced Analytics Division, Riccardo M Masolo works in the Bank’s Monetary Policy Outlook Division, Tim Munday works at Oxford University and David van Dijcke works at the University of Michigan.


Covid-19 briefing: Covid-19 crisis, climate change and green recovery

Published by Anonymous (not verified) on Mon, 09/11/2020 - 8:00pm in

Misa Tanaka

The Coronavirus pandemic and the measures to contain contagion have had far-reaching consequences for economic activity, which also led to a sharp fall in CO2 emissions. This has sparked new debate about how the recovery from the crisis could be made compatible with the Paris climate goals. In this post, I survey the emerging literature on the link between the economic recovery from the pandemic and climate change.

Covid-19 crisis and greenhouse gas emissions

A by-product of the widespread lockdown was a sharp fall in CO2 emissions. Using a confinement index to capture the extent to which different policies affect emissions and daily activity data for six economic sectors (power, industry, surface transport, public buildings and commerce, residential and aviation), Le Quéré et al (2020a) estimate that global CO2 emissions declined by 17% in April relative to mean 2019 levels. The reduction in surface transport was the biggest contributor to this decline. In a more recent update, however, Le Quéré et al (2020b) find that, by early June, CO2 emissions recovered to levels which are only 5% below the 2019 levels as confinement measures were eased. They estimate that the impact on 2020 annual emissions will depend on the duration of the confinement, with a low estimate of -4% relative to 2019 levels if pre-pandemic conditions return by mid-June, and a high estimate of -7% if some restrictions remain worldwide until end-2020. The latter figure is broadly in line with the IEA (2020)’s forecast of an 8% reduction in CO2 emissions in 2020.

Large economic shocks can have a lasting impact on emissions if they lead to a structural change. Hanna, Xu and Victor (2020) find that, out of the five major global economic shocks since the 1973 oil crisis, four were followed by slower growth of CO2 emissions than before the shock. The only exception was the 1998 Asian Crisis, which was followed by a decade of rapid industrial expansion in China, fuelled by coal. They also note that CO2 emissions growth halved to 1.6% per year in the decade which followed the 2007-08 Global Financial Crisis, and observe that 15% of the global stimulus funding after the crisis went into developing and deploying green technologies. The drop in CO2 emissions in 2020 – which is likely to be the largest since the Second World War – will itself ameliorate future warming. But the long-term impact of the Covid-19 shock on climate change will depend on whether the policies that support the economic recovery are consistent with curbing the CO2 emissions path. Thus, to meet the Paris climate goals, the post-Covid economic recovery will need to achieve less carbon-intensive output growth and job creation than in pre-Covid days.

The role of central banks and green recovery

Governments and central banks have responded to the economic fallout by implementing large fiscal and monetary stimulus packages. Hepburn et al (2020) conduct a survey of fiscal measures adopted by G20 countries by April, and judge that 92% of them are consistent with maintaining the status quo with regard to greenhouse gas emissions, while 4% will reduce emissions and 4% will increase them. They argue that near-term fiscal policy choices will determine the long-term trend in CO2 emissions and identify five policies to support a sustainable recovery: clean physical infrastructure investment; building efficiency retrofits; investment in education and training to address immediate unemployment from Covid-19 and structural unemployment from decarbonisation; natural capital investment; and clean R&D. Krebel et al (2020) make recommendations for green investment tailored more specifically to the UK context.

Several central banks have taken steps to help ensure that financial institutions take into account climate-related risks in their decisions. But some have argued that central banks should do more to align their policies with climate goals. For example, Dikau, Robins and Volz (2020) review policy actions taken by central banks and supervisors since the onset of the Covid crisis and conclude that their responses to Covid-19 have not taken account of climate change or wider sustainability goals. They propose a number of ways in which central bank policies could be made ‘greener’. The proposed measures include i) adjusting the eligible collateral pool, collateral haircuts and collateral valuation to account for climate-related risks; ii) aligning asset purchases with the Paris climate goals, e.g. by decreasing the share of assets exposed to climate-related transition risks in corporate debt purchases; iii) calibrating prudential policies for climate-risk exposures, e.g. by distinguishing low-carbon and high-carbon assets in calibrating risk weights for capital requirements; and iv) adopting sustainable and responsible investment principles for portfolio management.

An immediate question is whether implementing such measures at a time of turbulence could have unintended consequences. But a deeper issue is that most central banks do not have a mandate to engineer structural changes to transition the economy towards a low-carbon growth trajectory. Thus, unlike governments, they do not have at their disposal targeted tools that are designed to induce such structural changes. Several papers have argued that using prudential policy and monetary policy tools for the purpose of engineering such a structural change could run the risk of creating unwanted side-effects and of compromising central banks’ ability to achieve their primary aims. For example, Batten, Sowerbutts and Tanaka (2016) argue that calibrating prudential policy tools to climate-related risks does not necessarily achieve the aim of cutting CO2 emissions because, unlike a carbon tax, they cannot be targeted at emission-intensive activities. Campiglio et al (2018) also note that deploying monetary policy tools for the purpose of engineering a low-carbon transition could potentially compromise their effectiveness in meeting the price stability objective.

That said, highly-rated financial instruments issued by other entities which do have a mandate and tools to fund a low-carbon transition have been used in some central banks’ monetary policy and financial market operations. For example, eligible securities for the ECB’s Public Sector Purchase Programme include securities issued by the European Investment Bank, which is committed to increasing the share of its financing dedicated to climate action and environmental sustainability to reach 50% of its operations in 2025.

Central banks have also invested their foreign exchange reserves into the green bond fund managed by BIS Asset Management. This fund consists of US dollar-denominated green bonds issued by sovereign and quasi-sovereign entities and now has a market value of over US$1 billion. Thus, Bolton et al (2020) suggest that central banks could play a role in coordinating their own actions with a broad set of measures to be implemented by other players, including governments, the private sector, civil society and the international community.


There is a growing consensus among many central banks that they should factor climate-related financial risks into their operations, and substantial progress has been made in this direction in recent years. Nevertheless, central banks’ responses to the Covid-19 crisis have attracted renewed calls for their policies also to be calibrated to help support climate goals. The existing literature, however, cautions that most central banks do not have a mandate to engineer a low-carbon transition, and thus the tools at their disposal may not be best suited for this purpose. The lesson from this literature seems to be that any role of central banks in supporting a green recovery will need to be defined within an overall policy strategy to engineer a low-carbon transition.

Misa Tanaka works in the Bank’s Research Hub.


Financial shocks, reopening the case

Published by Anonymous (not verified) on Thu, 22/10/2020 - 7:00pm in

David Gauthier

Since the tumultuous events of 2007, much work has suggested that financial shocks are the main driver of economic fluctuations. In a recent paper, I propose a novel strategy to identify financial disturbances. I use the evolution of loan finance relative to bond finance to proxy for firms’ credit conditions and single out the shocks born in the financial sector. I apply and test the method for the US economy. I obtain three key results. First, financial shocks account for around a third of the US business cycle. Second, these shocks occur around precise events such as the Japanese crisis and the Great Recession. Third, the financial shocks I obtain are predictive of the corporate bond spread.

Looking for the financial shock

What makes it so difficult to isolate economic fluctuations born in the financial sector? Why not just use the usual credit spreads and asset prices to proxy firms’ credit conditions? 

The reason is simple: because virtually all shocks propagate via credit conditions, credit spreads and asset prices respond to pretty much all economic and non-economic events and are, as such, quite arduous to interpret. Another muddling factor is the difficulty of observing credit conditions (see for instance Romer and Romer (2017)). Raising debt requires borrowers’ compliance with countless binding agreements. If credit costs decrease while loan covenants tighten up, have credit conditions eased or not?

I use an off-the-shelf dynamic stochastic general equilibrium (DSGE) model to illustrate these points. I show that under general conditions, shocks shifting firms’ credit conditions through second-round effects are indistinguishable from shocks that directly impinge on credit conditions. Following Uhlig and De Fiore (2011), I then extend the model so that firms can fund production using either loans or bonds. Loan funding is more expensive but allows for renegotiation depending on borrowers’ productivity. 

This new framework has a decisive implication: because a sudden change in credit conditions drives a wedge between the costs of bonds and loans, financial shocks are now the only type of disturbances causing opposite movements in the volumes of the two types of funding.

Let the data speak

Based on this finding from the DSGE model, I use sign-restriction techniques within a simple VAR model to capture financial shocks and identify the sources of US economic fluctuations. More specifically, I identify financial shocks as the only type of disturbances that entail opposite movements in loan and bond volumes. The VAR model allows me to study the responses in investment, prices, and the policy rate to financial shocks. Despite imposing only a minimal set of restrictions on financial shocks, I find they imply impulse responses in line with predictions from various DSGE models for all variables.
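The identification step can be sketched for a toy two-variable system (loan and bond volumes). Given a reduced-form covariance matrix, any orthogonal rotation of its Cholesky factor is observationally equivalent; sign restrictions keep only rotations whose candidate financial shock moves the two funding volumes in opposite directions. This is a stylised stand-in for the paper’s larger VAR, with hand-picked numbers rather than estimates.

```python
import math
import random

def cholesky2(s11, s12, s22):
    """Cholesky factor of the 2x2 covariance matrix [[s11, s12], [s12, s22]]."""
    l11 = math.sqrt(s11)
    l21 = s12 / l11
    l22 = math.sqrt(s22 - l21 ** 2)
    return [[l11, 0.0], [l21, l22]]

def draw_impact_matrix(s11, s12, s22, rng, max_tries=1000):
    """Sign-restriction sketch for a 2-variable system (loans, bonds).

    Rotating the Cholesky factor by any angle preserves the implied
    covariance, so all rotations fit the data equally well. We keep the
    first rotation whose first column moves loans up and bonds down on
    impact: the restriction identifying the 'financial' shock.
    """
    L = cholesky2(s11, s12, s22)
    for _ in range(max_tries):
        t = rng.uniform(0.0, 2.0 * math.pi)
        q = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
        # Candidate impact matrix A = L @ Q; column 0 is the financial shock.
        a = [[sum(L[i][k] * q[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
        if a[0][0] > 0 and a[1][0] < 0:  # loans up, bonds down on impact
            return a
    raise RuntimeError("no rotation satisfied the sign restriction")

# Toy reduced-form covariance (hand-picked, not estimated):
impact = draw_impact_matrix(1.0, 0.2, 1.0, random.Random(0))
```

The paper applies the same logic inside a larger VAR that also includes investment, prices and the policy rate; this sketch only illustrates the rotate-and-check mechanics of sign-restriction identification.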

Figure 1: Historical shock decomposition for US GDP

Note: Contribution of the different structural shocks to output fluctuations. Grey areas correspond to NBER recession dates.

Figure 1 displays the historical shock decomposition for US GDP between 1985 and 2018. Financial shocks weigh on output growth during the Japanese banking crisis of the early 1990s and the Russian crisis of 1998. While they constitute one of the leading forces driving US fluctuations, other disturbances on the demand and supply sides of the economy play a significant role, especially during the Great Recession.

Putting the model to the test

The mechanism of debt arbitrage in the DSGE model, firms shifting between bond and loan finance, is the foundation of my identification strategy. Can it be tested? The answer is yes. I take advantage of the structural nature of the DSGE model to construct the series of financial disturbances that maximises the model’s likelihood for various US series over the period 1985 to 2018. I then study its implications for credit costs, a series I have ignored so far.

Figure 2 shows my index together with the bond spread for US non-financial corporates. The two series are highly correlated. More importantly, the credit shocks captured within the DSGE model are predictive of the bond spread.

Figure 2: Financial stress and the bond spread

Note: The orange line corresponds to the estimate of the financial shock in the DSGE model. The blue line corresponds to the Moody’s seasoned AAA corporate bond rate minus the federal funds rate. Grey areas correspond to NBER recession dates.

This simple exercise highlights two essential properties of my identification strategy. First, the method yields financial shocks that are related to credit spreads, a necessary condition for any operational measure of financial stress. Second, it suggests an explanation for shifts in borrowing costs based on changes in firms' debt financing following shocks to credit conditions.

In a nutshell

Identifying financial disturbances is difficult as financial variables can respond to all sorts of events. I propose to bypass this endogeneity issue by using firms’ debt arbitrage to identify financial shocks. I find these shocks account for a large share of the business cycle. The method highlights the importance of firms’ debt arbitrage, both as a gauge for credit conditions and as an explanation for shifts in credit spreads. Some practical advantages of the approach are worth mentioning. First, the method is easy to implement and produces results in line with more involved techniques. Second, it is model-based: the conditions for the validity of the identification scheme are set out clearly and can be modified to investigate its robustness.

David Gauthier works in the Bank’s Research Hub Secondees Division.

If you want to get in touch, please email us at or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge — or support — prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Machine learning the news for better macroeconomic forecasting

Published by Anonymous (not verified) on Tue, 20/10/2020 - 7:00pm in

Arthur Turrell, Eleni Kalamara, Chris Redl, George Kapetanios and Sujit Kapadia

Every day, journalists collate information about the world and, with nimble keystrokes, re-express it succinctly as newspaper copy. Events about the macroeconomy are no exception. So could there be additional valuable information about the economy contained in the news? In a recent research paper, we ask whether newspaper stories could help to predict future macroeconomic developments. We find that news can be used to enhance statistical economic forecasts of growth, inflation and unemployment — but only by using supervised machine learning techniques. We also find that the biggest forecast improvements occur when it matters most — during stressed periods.

Newspaper articles are different from the official data produced by statistical agencies such as the ONS in several respects. Official data, like GDP, have a clear meaning and method of construction, whereas newspaper articles cover everything and anything.

But newspaper articles can potentially augment official statistics in forecasts because of three key properties: they are timely, reflecting developments as they happen; they may affect the economic behaviour of the people reading them; and they cover developments that traditional statistics aren’t designed to tell us about (‘unknown unknowns’ in the words of Donald Rumsfeld). For instance, gathering economic storm clouds could take many forms, but we think that journalists will always write about them if they have the potential to affect the national economy — whether they are captured in national statistics or not.

Policymakers already use a vast range of tools and information, including the official data released by statistical agencies, to make their judgements. But anything that can expand the pool of data further, and so allow them to be better forewarned of what might be ahead, is welcome. Of course, policymakers already read the news and factor it into their judgements — here we are formalising that process using statistical models. However, these models have the advantage that they can ingest more articles than any one person could read.

To test whether newspaper text contains useful information, we took half a million articles from three popular British newspapers — The Daily Mail, The Daily Mirror, and The Guardian — and tried numerous methods to forecast GDP, unemployment, inflation, and more. We chose these three because they broadly reflect the differing styles and readership of UK newspapers, and because they have long back-runs available in digital formats.

To show whether newspaper text contains useful information on its own (for now ignoring the official data), we took some of the most popular ways of turning text into sentiment indices, for example counting positive and negative words using the Loughran and McDonald dictionary (see our working paper for a full list of indices), and applied them to articles in The Daily Mail. In Figure 1, we plot these sentiment indices against indicators often used to gauge sentiment about the economy, for example the Purchasing Managers’ Index or PMI. The blue solid line is the mean of the text-based measures; the dotted line shows the mean of indicators often used in policy; and the pink shaded region shows the minimum to maximum swathe of these.
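A dictionary-based sentiment index of this kind is straightforward to compute. The sketch below uses tiny illustrative word lists as stand-ins for the actual Loughran and McDonald dictionary; only the scoring scheme (net positive words as a share of total words) reflects the method described above.

```python
# Illustrative stand-in word lists -- NOT the actual Loughran-McDonald dictionary.
POSITIVE = {"growth", "gain", "recovery", "strong", "improve"}
NEGATIVE = {"crisis", "loss", "recession", "weak", "decline"}

def sentiment(article: str) -> float:
    """Score = (positive count - negative count) / total words."""
    words = article.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,") in POSITIVE for w in words)
    neg = sum(w.strip(".,") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment("Strong recovery lifts markets"))         # positive score
print(sentiment("Recession fears deepen after declines"))  # negative score
```

Averaging such scores over all articles in a month yields a monthly sentiment index that can be plotted against survey measures like the PMI.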

Figure 1

Figure 1 makes it clear that newspaper text-based sentiment closely tracks other measures of economic sentiment. We also see that it often leads other indicators of sentiment — it anticipates the downturn in sentiment during the 2007-08 Global Financial Crisis and the subsequent recovery. This is a hint that text might be useful in economic forecasts.

However, newspaper text must provide additional information relative to the standard economic data that is used by statistical models if it is to be useful. Figure 1 just shows that it contains a signal, but it could be the same signal that’s captured by existing data. And, indeed, when we ran forecasting exercises using the popular existing methods of gauging sentiment and uncertainty, we found that the vast majority did not improve on forecasts that took account of standard economic data, which in our case included real output, labour market indicators, inflation, business and household expectations, and more.

So, to get the best out of text, we came up with an alternative based on machine learning. Rather than dictate how sentiment is determined by text, for instance by assigning ‘happy’ a score of +1 and ‘sad’ a score of -1, as is done in most current methods, we fed the counts of many different words related to the economy into a neural network (a type of machine learning algorithm) and let it decide what words to put weight on to forecast the future. We used this trained neural network to forecast economic activity out-of-sample up to 9 months into the future. We found that, across newspapers, across forecasting horizons and across macroeconomic variables, the combination of text, standard economic data and a neural network was able to improve upon similar forecasts that just used the standard data. The performance of the more sophisticated approach was fairly similar regardless of which newspaper the text came from. We tried numerous other machine learning approaches and not all were able to augment forecasts relative to the benchmark — but those that were put a little weight on a lot of different terms from the text. The neural network provided the best overall forecast improvements.
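The word-counts-into-a-network idea can be sketched as follows. Everything here is an assumption for illustration — synthetic data in which only a few word counts truly matter, a small scikit-learn network, and arbitrary parameter choices — not the configuration used in the working paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in: rows are months, columns are counts of
# economy-related words in that month's articles. The true relationship
# puts weight on only a few words; the network must learn which.
n_months, n_words = 200, 50
X = rng.poisson(5, size=(n_months, n_words)).astype(float)
true_w = np.zeros(n_words)
true_w[:5] = [0.3, -0.2, 0.1, -0.4, 0.2]       # only 5 words matter
y = X @ true_w + rng.normal(0, 0.5, n_months)  # stand-in for GDP growth

train, test = slice(0, 150), slice(150, None)

nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
nn.fit(X[train], y[train])
rmse_nn = np.sqrt(np.mean((nn.predict(X[test]) - y[test]) ** 2))
print(f"out-of-sample RMSE: {rmse_nn:.3f}")
```

The point of letting the network choose the weights is exactly the finding reported above: the successful approaches put a little weight on a lot of different terms, rather than imposing fixed scores word by word.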

Figure 2

As a simple demonstration, Figure 2 shows an out-of-sample forecast by a neural network of GDP growth at 3 months ahead — here RMSE stands for root mean squared error, and a smaller RMSE means better forecast performance. The neural net uses text from The Daily Mail and GDP from the previous month (using the ONS’ monthly GDP series). The benchmark forecast uses ordinary least squares (OLS) and GDP from the previous period as there is overwhelming evidence that, on average, and across series and time periods, OLS is tough to beat (but the results are the same if the benchmark uses a neural network rather than OLS). The figure shows that adding text to existing data can improve forecast performance.
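The benchmark comparison behind Figure 2 can be illustrated with plain OLS. The series and coefficients below are invented for the sketch; the only thing it shares with the paper's exercise is the design: forecast next-period growth with (a) lagged growth only, and (b) lagged growth plus a text sentiment index, and compare out-of-sample RMSE.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: growth depends on its own lag and on lagged sentiment.
T = 300
sent = rng.normal(size=T)                       # text sentiment index
g = np.zeros(T)
for t in range(1, T):
    g[t] = 0.5 * g[t - 1] + 0.4 * sent[t - 1] + rng.normal(0, 0.3)

def ols_rmse(X, y, split=200):
    """Fit OLS on the first `split` observations, report RMSE on the rest."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    resid = y[split:] - X[split:] @ beta
    return np.sqrt(np.mean(resid ** 2))

rmse_bench = ols_rmse(g[:-1], g[1:])                               # lag only
rmse_text = ols_rmse(np.column_stack([g[:-1], sent[:-1]]), g[1:])  # lag + text
print(f"benchmark RMSE {rmse_bench:.3f}, with text {rmse_text:.3f}")
```

Because sentiment genuinely drives growth in this toy setup, the text-augmented model attains a lower RMSE, which is the pattern the figure reports for the real data.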

Exploring the channels behind the success of forecasts that include text in this way is outside of the scope of the research but there is a plausible story that no news is good news and — conversely — bad economic news that’s brewing is news, and journalists will report on it. And, as noted, it’s also possible that — like Keynes’ animal spirits or Shiller’s ideas about irrational exuberance and viral economic narratives — newspapers themselves play a role in forming expectations and shaping economic behaviour.

However, we did explore when it is that newspaper text adds the most forecasting power, and it seems that it’s most potent at times of economic change, for instance during the Global Financial Crisis. So if text is trying to tell a story about an incoming economic storm, it’s worth taking it seriously — and such periods of change and stress are precisely when good economic forecasts matter the most.

US President Bill Clinton once said, “Follow the trend lines, not the headlines” but, with the help of machine learning, perhaps we can do both?

Arthur Turrell works in the Bank’s Advanced Analytics Division, Eleni Kalamara works at King’s College London, Chris Redl works at the IMF, George Kapetanios works at King’s College London and Sujit Kapadia works at the ECB.

If you want to get in touch, please email us at or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge — or support — prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

The Coronavirus Recession Is Just Beginning

Published by Anonymous (not verified) on Sat, 03/10/2020 - 2:51pm in

(A couple days ago I gave a talk — virtually, of course — to a group of activists about the state of the economy. This is an edited and somewhat expanded version of what I said.)

The US economy has officially been in recession since February. But what we’ve seen so far looks very different from the kind of recessions we’re used to, both because of the unique nature of the coronavirus shock and because of the government response to it. In some ways, the real recession is only beginning now. And if federal stimulus is not restored, it’s likely to be a very deep and prolonged one.

In a normal recession, the fundamental problem is an interruption in the flow of money through the economy. People or businesses reduce their spending for whatever reason. But since your spending is someone else’s income, lower spending here reduces incomes and employment over there — this is what we call a fall in aggregate demand. Businesses that sell less need fewer workers and generate less profits for their owners. That lost income causes other people to reduce their spending, which reduces income even more, and so on.

Now, a small reduction in spending may not have any lasting effects — people and businesses have financial cushions, so they won’t have to cut spending the instant their income falls, especially if they expect the fall in income to be temporary. So if there’s just a small fall in demand, the economy can return to its old growth path quickly. But if the fall in spending is big enough to cause many workers and businesses to cut back their own spending, then it can perpetuate itself and grow larger instead of dying out. This downward spiral is what we call a recession. Usually it’s amplified by the financial system, as people who lose income can’t pay their debts, which makes banks less willing or able to lend, which forces people and businesses that needed to borrow to cut back on their spending. New housing and business investment in particular are very dependent on borrowed money, so they can fall steeply if loans become less available. That creates another spiral on top of the first. Or in recent recessions, often it’s the financial problems that come first.
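The downward spiral described above is the textbook income-expenditure multiplier, and the arithmetic is worth seeing once. The numbers below are purely illustrative, not taken from the talk: each round, a dollar of lost income cuts spending by the marginal propensity to consume (mpc), so the total fall in demand converges to the initial cut times 1/(1 - mpc).

```python
def total_demand_fall(initial_cut: float, mpc: float, rounds: int = 100) -> float:
    """Sum the successive rounds of the income-expenditure spiral."""
    total, this_round = 0.0, initial_cut
    for _ in range(rounds):
        total += this_round
        this_round *= mpc  # next round: spending falls by mpc per lost dollar
    return total

# With mpc = 0.75, a $100bn cut snowballs toward 100 / (1 - 0.75) = $400bn.
print(round(total_demand_fall(100, 0.75), 1))  # → 400.0
```

The same arithmetic shows why a small shock can die out: with a low mpc (large financial cushions), the rounds shrink quickly and the total stays close to the initial cut.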

But none of that is what happened in this case. Businesses didn’t close because there wasn’t enough money flowing through the economy, or because they couldn’t get loans. They closed because under conditions of pandemic and lockdown they couldn’t do specific things — serve food, offer live entertainment, etc. And to a surprising extent, the stimulus and unemployment benefits meant that people who stopped working did not lose income. So you could imagine that once the pandemic was controlled, we could return to normal much quicker than in a normal recession.

That was the situation as recently as August.

The problem is that much of the federal spending dried up at the end of July. And that is shifting the economy from a temporary lockdown toward a self-perpetuating fall in incomes and employment.

One way we see the difference between the lockdown and a recession is the industries affected. The biggest falls in employment were in entertainment and recreation and food service, which are industries that normally weather downturns pretty well, while construction and manufacturing, normally the most cyclical industries, have been largely unaffected. Meanwhile, employment in health and education, which in previous recessions has not fallen at all, this time has declined quite a bit.1

If we look at employment, for instance, which is normally our best measure of business-cycle conditions, we again see something very different from past recessions. Total employment fell by 20 million in April and May of this year. In just two months, 15 percent of American workers lost their jobs. There's nothing remotely comparable historically — more jobs were lost in the Depression, but that was a slow process over years, not just two months. The post-World War II demobilization was the closest, but that only involved about half the fall in employment. So this is a job loss without precedent.

Since May, about half of those 20 million people have gone back to work. We’re about 10 million jobs down from a year ago. Still, that might look like a fairly strong recovery.

But in the spring, the vast majority of unemployed people described themselves as on temporary layoff — they expected to go back to their jobs. The recovery in employment has almost all come from that group. If we look at people who say they have lost their jobs permanently, that number has continued to grow. Back in May, almost 90 percent of the people out of work described it as temporary. Today, it’s less than half. Business closings and layoffs that were expected to be temporary in the spring are now becoming permanent. So in a certain sense, even though unemployment is officially much lower than it was a few months ago, unemployment as we usually think of it is still rising.

We can see this even more dramatically if we look at income. Most people don't realize how large and effective the stimulus and pandemic unemployment insurance programs were. Back in the spring, most people — me included — thought there was no way the federal government would spend on the scale required to offset the job losses. The history of stimulus in this country — definitely including the ARRA under Obama — has always been too little, too late. Unemployment insurance in particular has historically had such tight eligibility requirements that the majority of people who lose their jobs never get it.

But this time, surprisingly, the federal stimulus was actually big enough to fill the hole of lost incomes. The across-the-board $600 per week unemployment benefit reached a large share of people who had lost their jobs, including gig workers and others who would not have been able to get conventional UI. And of course the stimulus checks reached nearly everyone. As a result, if we look at household income, we see that as late as July, it was substantially above pre-recession levels. This is a far more effective response than the US has made to any previous downturn. And it’s nearly certain that the biggest beneficiaries were lower-wage workers.

We can see the effects of this in the Household Pulse surveys conducted by the Census. Every week since March, they've been asking a sample of households questions about their economic situation, including whether they have enough money to meet their basic needs. And the remarkable thing is that over that period, there has been no increase in the number of people who say they can't pay their rent or their mortgage or can't get enough to eat. About 9 percent of families said they sometimes or often couldn't afford enough to eat, and about 20 percent of renters said they were unable to pay the last month's rent in full. Those numbers are shockingly high. But they are no higher than they were before the pandemic.

To be clear – there are millions of people facing serious deprivation in this country, far more than in other rich countries. But this is a longstanding fact about the United States. It doesn’t seem to be any worse than it was a year ago. And given the scale of the job loss, that is powerful testimony to how effective the stimulus has been.

But the stimulus checks were one-off, and the pandemic unemployment insurance expired at the end of July. Fortunately there are other federal unemployment supplements, but they are nowhere near as generous. So we are now seeing the steep fall in income that we did not see in the first five months of the crisis.

That means we may now be about to see the deep recession that we did not really get in the spring and summer. And history suggests that recovery from that will be much slower. If we look at the last downturn, it took five full years after the official end of the recession for employment to just get back to its pre-recession level. And in many ways, the economy had still not fully recovered when the pandemic hit.

One thing we may not see, though, is a financial crisis. The Fed is in some ways one of the few parts of our macroeconomic policy apparatus that works well, and it's become even more creative and aggressive as a result of the last crisis. In the spring, people were talking about a collapse in credit, businesses unable to get loans, people unable to borrow. But this really has not happened. And there's good reason to think that the Fed has all the tools it needs if a credit crunch did develop, or if some financial institutions end up in distress. Even if we look at state and local governments, where austerity is already starting and is going to be a big part of what makes this recession severe, all the evidence is that they aren't willing to borrow, not that they can't borrow.

Similarly with the stock market — people think it’s strange that it’s doing well, that it’s delinked from the real economy, or that it’s somehow an artificial result of Fed intervention. To be clear, there’s no question that low interest rates are good for stock prices, but that’s not artificial — there’s no such thing as a natural interest rate.

More to the point, by and large, stocks are doing well because profits are doing well. Stock market indexes are dominated by a small number of large companies, and many of those have seen sales hold up or grow. Again, so far we haven't seen a big fall in total income. So businesses in general are not losing sales. What we have seen is a division of businesses into winners and losers. The businesses most affected by the pandemic have seen big losses of sales and profits and their share prices have gone down. But the businesses that can continue to operate have done well. So there's nothing mysterious in the fact that Amazon's stock price, for instance, has risen, and there's no reason to think it's going to fall. If you look at specific stocks, you see that by and large the ones that are doing well, the underlying business is doing well.

This doesn’t mean that what’s good for the stock market is good for ordinary workers. But again, that’s always been true. Shareholders don’t care about workers, they only care about the flow of profits their shares entitle them to. And if you’re a shareholder in a company that makes most of its sales online, that flow of profits is looking reasonably healthy right now.

So going forward, I think the critical question is whether we see any kind of renewed stimulus. If we do, it’s still possible that the downward income-expenditure spiral can be halted. At some point soon that will be much harder.

“Monetary Policy in a Changing World”

Published by Anonymous (not verified) on Fri, 18/09/2020 - 6:52am in

While looking for something else, I came across this 1956 article on monetary policy by Erwin Miller. It’s a fascinating read, especially in light of current discussions about, well, monetary policy in a changing world. Reading the article was yet another reminder that, in many ways, debates about central banking were more sophisticated and far-reaching in the 1950s than they are today.

The recent discussions have been focused mainly on what the goals or targets of monetary policy should be. While the rethinking there is welcome — higher wages are not a reliable sign of rising inflation; there are good reasons to accept above-target inflation, if it developed — the tool the Fed is supposed to be using to hit these targets is the overnight interest rate faced by banks, just as it’s been for decades. The mechanism by which this tool works is basically taken for granted — economy-wide interest rates move with the rate set by the Fed, and economic activity reliably responds to changes in these interest rates. If this tool has been ineffective recently, that’s just about the special conditions of the zero lower bound. Still largely off limits are the ideas that, when effective, monetary policy affects income distribution and the composition of output and not just its level, and that, to be effective, monetary policy must actively direct the flow of credit within the economy and not just control the overall level of liquidity.

Miller is asking a more fundamental question: What are the institutional requirements for monetary policy to be effective at all? His answer is that conventional monetary policy makes sense in a world of competitive small businesses and small government, but that different tools are called for in a world of large corporations and where the public sector accounts for a substantial part of economic activity. It’s striking that the assumptions he already thought were outmoded in the 1950s still guide most discussions of macroeconomic policy today.1

From his point of view, relying on the interest rate as the main tool of macroeconomic management is just an unthinking holdover from the past — the “normal” world of the 1920s — without regard for the changed environment that would favor other approaches. It’s just the same today — with the one difference that you’ll no longer find these arguments in the Quarterly Journal of Economics.2

Rather than resort unimaginatively to traditional devices whose heyday was one with a far different institutional environment, authorities should seek newer solutions better in harmony with the current economic ‘facts of life.’ These newer solutions include, among others, real estate credit control, consumer credit control, and security reserve requirements…, all of which … restrain the volume of credit available in the private sector of the economy.

Miller has several criticisms of conventional monetary policy, or as he calls it, “flexible interest rate policies” — the implicit alternative being the wartime policy of holding key rates fixed. One straightforward criticism is that changing interest rates is itself a form of macroeconomic instability. Indeed, insofar as both interest rates and inflation describe the terms on which present goods trade for future goods, it’s not obvious why stable inflation should be a higher priority than stable interest rates.

A second, more practical problem is that to the extent that a large part of outstanding debt is owed by the public sector, the income effects of interest rate changes will become more important than the price effects. In a world of large public debts, conventional monetary policy will affect mainly the flow of interest payments on existing debt rather than new borrowing. Or as Miller puts it,

If government is compelled to borrow on a large scale for such reasons of social policy — i.e., if the expenditure programs are regarded as of such compelling social importance that they cannot be postponed merely for monetary considerations — then it would appear illogical to raise interest rates against government, the preponderant borrower, in order to restrict credit in the private sphere.

Arguably, this consideration applied more strongly in the 1950s, when government accounted for the majority of all debt outstanding; but even today governments (federal plus state and local) accounts for over a third of total US debt. And the same argument goes for many forms of private debt as well.

As a corollary to this argument — and my MMT friends will like this — Miller notes that a large fraction of federal debt is held by commercial banks, whose liabilities in turn serve as money. This two-step process is, in some sense, equivalent to simply having the government issue the money — except that the private banks get paid interest along the way. Why would inflation call for an increase in this subsidy?


The continued existence of a large amount of that bank-held debt may be viewed as a sop to convention, a sophisticated device to issue needed money without appearing to do so. However, it is a device which requires that a subsidy (i.e., interest) be paid the banks to issue this money. It may therefore be argued that the government should redeem these bonds by an issue of paper money (or by an issue of debt to the central bank in exchange for deposit credit). … The upshot would be the removal of the governmental subsidy to banks for performing a function (i.e., creation of money) which constitutionally is the responsibility of the federal government.

Finance franchise, anyone?

This argument, I’m sorry to say, does not really work today — only a small fraction of federal debt is now owned by commercial banks, and there’s no longer a link, if there ever was, between their holdings of federal debt and the amount of money they create by lending. There are still good arguments for a public payments system, but they have to be made on other grounds.

The biggest argument against using a single interest rate as the main tool of macroeconomic management is that it doesn’t work very well. The interesting thing about this article is that Miller doesn’t spend much time on this point. He assumes his readers will already be skeptical:

There remains the question of the effectiveness of interest rates as a deterrent to potential private borrowing. The major arguments for each side of this issue are thoroughly familiar and surely demonstrate most serious doubt concerning that effectiveness.

Among other reasons, interest is a small part of overall cost for most business activity. And in any situation where macroeconomic stabilization is needed, it’s likely that expected returns will be moving for other reasons much faster than a change in interest rates can compensate for. Keynes says the same thing in the General Theory, though Miller doesn’t mention it.3 (Maybe in 1956 there wasn’t any need to.)

Because the direct link between interest rates and activity is so weak, Miller notes, more sophisticated defenders of the central bank’s stabilization role argue that it’s not so much a direct link between interest rates and activity as the effect of changes in the policy rate on banks’ lending decisions. These arguments “skillfully shift the points of emphasis … to show how even modest changes in interest rates can bring about significant credit control effects.”

Here Miller is responding to arguments made by a line of Fed-associated economists from his contemporary Robert Roosa through Ben Bernanke. The essence of these arguments is that the main effect of interest rate changes is not on the demand for credit but on the supply. Banks famously lend long and borrow short, so a bank's lending decisions today must take into account financing conditions in the future.4 A key piece of this argument — which makes it an improvement on orthodoxy, even if Miller is ultimately right to reject it — is that the effect of monetary policy can't be reduced to a regular mathematical relationship, like the interest-output semi-elasticity of around 1 found in contemporary forecasting models. Rather, the effect of policy changes today depend on their effects on beliefs about policy tomorrow.

There’s a family resemblance here to modern ideas about forward guidance — though people like Roosa understood that managing market expectations was a trickier thing than just announcing a future policy. But even if one granted the effectiveness of this approach, an instrument that depends on changing beliefs about the long-term future is obviously unsuitable for managing transitory booms and busts.

A related point is that insofar as rising rates make it harder for banks to finance their existing positions, there is a chance this will create enough distress that the Fed will have to intervene — which will, of course, have the effect of making credit more available again. Once the focus shifts from the interest rate to credit conditions, there is no sharp line between the Fed’s monetary policy and lender of last resort roles.

A further criticism of conventional monetary policy is that it disproportionately impacts more interest-sensitive or liquidity-constrained sectors and units. Defenders of conventional monetary policy claim (or more often tacitly assume) that it affects all economic activity equally. The supposedly uniform effect of monetary policy is both supposed to make it an effective tool for macroeconomic management, and helps resolve the ideological tension between the need for such management and the belief in a self-regulating market economy. But of course the effect is not uniform. This is both because debtors and creditors are different, and because interest makes up a different share of the cost of different goods and services.

In particular, investment, especially investment in housing and other structures, is more sensitive to interest and liquidity conditions than current production. Or as Miller puts it, "Interest rate flexibility uses instability of one variety to fight instability of a presumably more serious variety: the instability of the loanable funds price-level and of capital values is employed in an attempt to check commodity price-level and employment instability." (emphasis added)

The point that interest rate changes, and monetary conditions generally, change the relative price of capital goods and consumption goods is important. Like much of Miller’s argument, it’s an unacknowledged borrowing from Keynes; more strikingly, it’s an anticipation of Minsky’s famous “two price” model, where the relative price of capital goods and current output is given a central role in explaining macroeconomic dynamics.

If we take a step back, of course, it’s obvious that some goods are more illiquid than others, and that liquidity conditions, or the availability of financing, will matter more for production of these goods than for the more immediately saleable ones. Which is one reason that it makes no sense to think that money is ever “neutral.”5

Miller continues:

In inflation, e.g., employment of interest rate flexibility would have as a consequence the spreading of windfall capital losses on security transactions, the impairment of capital values generally, the raising of interest costs of governmental units at all levels, the reduction in the liquidity of individuals and institutions in random fashion without regard for their underlying characteristics, the jeopardizing of the orderly completion of financing plans of nonfederal governmental units, and the spreading of fear and uncertainty generally.

Some businesses have large debts; when interest rates rise, their earnings fall relative to businesses that happen to have less debt. Some businesses depend on external finance for investment; when interest rates rise, their costs rise relative to businesses that are able to finance investment internally. In some industries, like residential construction, interest is a big part of overall costs; when interest rates rise, these industries will shrink relative to ones that don’t rely on borrowing to finance their current operations.
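The first of these channels is easy to see with a toy calculation. The sketch below uses entirely made-up numbers (the firms, their operating income, and their debt loads are all hypothetical) just to show how a uniform rate rise opens up a gap between otherwise identical firms that differ only in leverage:

```python
# Toy illustration with hypothetical numbers: two firms with identical
# operating income but different debt loads, before and after a rate rise.

def net_earnings(operating_income, debt, rate):
    """Earnings after interest expense on outstanding debt."""
    return operating_income - debt * rate

for rate in (0.02, 0.06):  # policy rate rises from 2% to 6%
    low_debt = net_earnings(100, 50, rate)    # lightly levered firm
    high_debt = net_earnings(100, 800, rate)  # heavily levered firm
    print(f"rate={rate:.0%}: low-debt firm earns {low_debt:.0f}, "
          f"high-debt firm earns {high_debt:.0f}")
```

With these numbers, the earnings gap between the two firms widens from 15 to 47 as the rate rises, even though nothing about their underlying businesses has changed. The same “uniform” policy instrument lands very differently on different balance sheets.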

In all these ways, monetary policy is a form of central planning, redirecting activity from some units and sectors to other units and sectors. It’s just a concealed, and in large part for that reason crude and clumsy, form of planning.

Or as Miller puts it, conventional monetary policy

discriminates between those who have equity funds for purchases and those who must borrow to make similar purchases. … In so far as general restrictive action successfully reduces the volume of credit in use, some of those businesses and individuals dependent on bank credit are excluded from purchase marts, while no direct restraint is placed on those capable of financing themselves.

In an earlier era, Miller suggests, most borrowing was for business investment; most investment was externally financed; and business cycles were driven by fluctuations in investment. So there was a certain logic to focusing on interest rates as a tool of stabilization. Honestly, I’m not sure that was ever true. But I certainly agree that by the 1950s — let alone today — it was not.

In a footnote, Miller offers a more compelling version of this story, attributing to the British economist R. S. Sayers the idea of

sensitive points in an economy. [Sayers] suggests that in the English economy mercantile credit in the middle decades of the nineteenth century and foreign lending in the later decades of that century were very sensitive spots and that the bank rate technique was particularly effective owing to its impact upon them. He then suggests that perhaps these sensitive points have given way to newer ones, namely, stock exchange speculation and consumer credit. Hence he concludes that central bank instruments should be employed which are designed to control these newer sensitive areas.

This, to me, is a remarkably sophisticated view of how we should think about monetary policy and credit conditions. It’s not an economywide increase or decrease in activity, which can be imagined as a representative household shifting their consumption over time; it’s a response of whatever specific sectors or activities are most dependent on credit markets, which will be different in different times and places. Which suggests that a useful education on monetary policy requires less calculus and more history and sociology.

Finally, we get to Miller’s own proposals. In part, these are for selective credit controls: direct limits on the volume of specific kinds of lending, which are likely to be more effective at reining in inflationary pressures, with less collateral damage. Yes, these kinds of direct controls pick winners and losers — but no more than conventional policy does, just more visibly. As Miller notes, credit controls imposed for macroeconomic stabilization wouldn’t be qualitatively different from the various regulations on credit that are already imposed for other purposes — tho admittedly that argument probably went further in a time when private credit was tightly regulated than in the permanent financial Purge we live in today.

His other proposal is for comprehensive security reserve requirements — in effect generalizing the limits on bank lending to financial positions of all kinds. The logic of this idea is clear, but I’m not convinced — certainly I wouldn’t propose it today. I think when you have the kind of massive, complex financial system we have today, rules that have to be applied in detail, at the transaction level, are very hard to make effective. It’s better to focus regulation on the strategic high ground — but please don’t ask me where that is!

More fundamentally, I think the best route to limiting the power of finance is for the public sector itself to take over functions private finance currently provides, as with a public payments system, a public investment bank, etc. This also has the important advantage of supporting broader steps toward an economy built around human needs rather than private profit. And it’s the direction that, grudgingly but steadily, the response to various crises is already pushing us, with the Fed and other authorities reluctantly stepping in to perform various functions that the private financial system fails to. But this is a topic for another time.

Miller himself is rather tentative in his positive proposals. And he forthrightly admits that they are “like all credit control instruments, likely to be far more effective in controlling inflationary situations than in stimulating revival from a depressed condition.” This should be obvious — even Ronald Reagan knew you can’t push on a string. This basic asymmetry is one of the many everyday insights that was lost somewhere in the development of modern macro.

The conversation around monetary policy and macroeconomics is certainly broader and more realistic today than it was 15 or 20 years ago, when I started studying this stuff. And Jerome Powell — and even more the activists and advocates who’ve been shouting at him — deserves credit for the Fed’s tentative moves away from the reflexive fear of full employment that has governed monetary policy for so long. But when you take a longer look and compare today’s debates to earlier decades, it’s hard not to feel that we’re still living in the Dark Ages of macroeconomics.
