Inflation


The Implausibility Of The Chapwood Index

Published by Anonymous (not verified) on Mon, 11/01/2021 - 1:00am in

Tags 

Inflation

The Chapwood Index has become a popular source for hard money proponents who are pushing the line that inflation is really much higher than what government statisticians suggest. It has taken over the limelight from Shadowstats, which pioneered pushing that particular line. Although it is entirely expected that individuals can face cost of living increases that rise faster than the official CPI inflation rate, the levels of inflation suggested by the Chapwood Index do not appear to offer any plausible information about the price level as the concept is used in macroeconomics.

Trust, But Verify

Equity analysts can survive being trusting (or even gullible) -- they only need to find a big winner to cancel out losers, and the winners often need to be bought when a firm is still in an early stage of development, requiring optimism about growth prospects for an unproven business. By contrast, fixed income investors only receive a relatively small amount of interest when compared to the capital at risk, so they generally cannot afford any significant credit impairments. (The exceptions would be high yield and emerging market fixed income investors, and distressed debt specialists that invest based on recovery values.) Since it is generally poor form to tell everyone you deal with that you assume they are lying thieves, one instead generally projects the attitude of "trust, but verify" (to use a well-known Cold War era phrase).
I have not spent a great deal of time looking at the Chapwood Index (URL: https://chapwoodindex.com/) or the entity behind the calculation, but the website declares that they are a financial firm. Since the Chapwood Index is a form of advertising, it is clear that they have a financial incentive to have the most extreme results possible -- nobody would care if their index suggested that the cost of living is half a percent higher than CPI inflation. The question is: can we verify their results?
Based on what I saw of their website, there does not appear to be a way of validating their numbers. To do so, we need two things.

  1. The prices used for each city for each of the 500 items in the index.
  2. The weighting methodology.

For the latter, they declare the following:

We take the precise price for the same item quarter by quarter and calculate the increase or decrease. We tracked the prices on a quarterly basis and created a weighted index based on price. These items include basically everything that most Americans consume during the regular course of their lives. 

One possible interpretation of this is that the weight is according to price. Roughly speaking, take the price of 500 items and add them up, and see how the total changes each year. This is problematic, as was noted in a Reddit article that I ran into when doing a web search (link). For example, the price of a litre of gasoline where I live is currently just over $1. This would have a small weight on a price-weighted index, but it ignores that I buy many more litres of gasoline in a year than I buy Blackberry services (which amazingly is in the index). However, the word "based" is vague, and the rule they use could be more complex.
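To see why a price-weighted scheme would be misleading, here is a minimal sketch contrasting it with an expenditure-weighted calculation. The prices and quantities below are invented for illustration and are not Chapwood's actual data or methodology.

```python
# Illustrative only: hypothetical prices and quantities, not Chapwood's data.

def price_weighted_change(old_prices, new_prices):
    """Index change when each item's weight is simply its price."""
    return sum(new_prices) / sum(old_prices) - 1

def expenditure_weighted_change(old_prices, new_prices, quantities):
    """Index change weighted by how much of each item is actually bought."""
    old_cost = sum(p * q for p, q in zip(old_prices, quantities))
    new_cost = sum(p * q for p, q in zip(new_prices, quantities))
    return new_cost / old_cost - 1

# Two items: a litre of gasoline and a monthly phone service plan.
old = [1.00, 50.00]   # prices in year 1
new = [1.20, 50.00]   # gasoline up 20%, plan flat
qty = [2000, 12]      # litres bought per year vs. monthly plans per year

print(price_weighted_change(old, new))             # tiny: gasoline barely moves the price sum
print(expenditure_weighted_change(old, new, qty))  # large: gasoline dominates actual spending
```

With these numbers, the price-weighted change is about 0.4% while the expenditure-weighted change is about 15%, which is the Reddit critique in miniature: cheap items bought in bulk get far too little weight in a price-weighted index.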
Although they offer some examples where certain items rose more than the aggregate CPI (an unsurprising outcome; price changes are not uniform), there does not appear to be any way of validating the numbers. You have to trust the producers of the index.

Other Price Indices are Not Transparent Either

By itself, the lack of access to underlying data is not that unusual -- we cannot see the underlying prices that are used in the CPI calculation. Historically, releasing the data would not have been technically feasible, but one could imagine it being made available at present (probably for a fee). It would certainly be interesting if that data were made available, but doing so might pose issues around confidentiality. It would be possible to reverse engineer pricing strategies used by the retailers being polled (as well as the household expenditure data used for weightings), which might not be desired by the retailers.
Since it is unlikely that any fixed income analysts have the time to trawl through the raw data, there is no constituency pushing for its release. Although stories about governments cooking inflation numbers float around, the way to verify them is to look at them in the context of other economic and price data.
Nobody sensible would attempt to use personal experience with prices to look at global fixed income pricing. For example, I have no way to judge price changes in Toronto or Vancouver -- never mind Dallas, Sheffield, Osaka or Toulouse. Very simply, I cannot pretend to know on a personal basis how the cost of living of Americans is evolving, but I can judge whether aggregate price level indices are coherent with other economic data.

Size of the Discrepancy

The Chapwood Index is not actually a single index; rather, it is given in terms of annual price changes for 50 American cities. The website for some reason compares the national CPI index to their city-level series, seemingly unaware that there are regional CPI series. I will use Boston as an example.
In order to stick to full year data, I will compare the Chapwood data to the Boston CPI data (which I accessed at this webpage; series IDs: CUURS11ASA0, CUUSS11ASA0) for 2016-2019. The Chapwood data suggest annual inflation rates of 11.5%, 11.1%, 9.9%, and 8.7%, whereas the BLS rates are 1.5%, 2.5%, 3.3%, and 1.9%. This is an extremely large difference, dwarfing the dispersion between other recognised inflation series.
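Compounding the four annual rates quoted above shows how large the gap in implied price levels is over just four years:

```python
from math import prod

# Annual inflation rates for Boston, 2016-2019, as quoted in the text.
chapwood = [0.115, 0.111, 0.099, 0.087]
bls_cpi  = [0.015, 0.025, 0.033, 0.019]

def compound(rates):
    """Cumulative price level multiple implied by a list of annual rates."""
    return prod(1 + r for r in rates)

print(f"Chapwood price level multiple: {compound(chapwood):.3f}")  # ~1.48
print(f"BLS CPI price level multiple:  {compound(bls_cpi):.3f}")   # ~1.10
```

In other words, the Chapwood series implies Boston prices rose nearly 50% over four years, versus roughly 10% for the regional CPI.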
I will not attempt to aggregate the Chapwood Index, but would note that the lowest 5-year inflation rate (ending mid-2020) they report is 6.9% (Mesa), the median is 9.5%, and the highest is 13.2% (Oakland). 
It would be completely unremarkable for someone to develop a methodology that suggested inflation rates were higher than CPI, since we know that methodology changes have lowered CPI inflation rates. Meanwhile, it would certainly be possible to design an inflation index that has much more volatility than the CPI, which would allow for more divergences on a year-to-year basis. This allows higher inflation rates in some years -- balanced by deflation in other years (which is not happening in the Chapwood data, where only one city had a single inflation print below 6% in 2016-2019). Nevertheless, I see no way of bridging the gap between the Chapwood Index and the CPI on good faith technical differences -- the compounded growth rate differentials are far too high.

Cost of Living Versus Price Indices

As the Bureau of Labor Statistics notes, the CPI is not a cost-of-living index. They write:

The CPI frequently is called a cost-of-living index, but it differs in important ways from a complete cost-of-living measure. We use a cost-of-living framework in making practical decisions about questions that arise in constructing the CPI. A cost-of-living index is a conceptual measurement goal, however, and not a straightforward alternative to the CPI. A cost-of-living index would measure changes over time in the amount that consumers need to spend to reach a certain utility level or standard of living. Both the CPI and a cost-of-living index would reflect changes in the prices of goods and services, such as food and clothing that are directly purchased in the marketplace; but a complete cost-of-living index would go beyond this role to also take into account changes in other governmental or environmental factors that affect consumers' well-being. It is very difficult to determine the proper treatment of public goods, such as safety and education, and other broad concerns, such as health, water quality, and crime, that would constitute a complete cost-of-living framework. Since the CPI does not attempt to quantify all the factors that affect the cost-of-living, it is sometimes termed a conditional cost-of-living index.

The creators of the Chapwood Index, by contrast, came up with an overly simple approach to calculating the "cost of living," and made decisions that would obviously increase the rate of inflation.
A key example is the inclusion of what they label as "Federal" in their list of 500 index items. Since they also include "state" and "property" in the list, this presumably refers to taxes. If we lump income taxes paid by a household into the "cost of living," we should expect taxes to rise faster than the cost of goods and services, even in a magical world where there are no relative price shifts.

  1. Due to productivity growth, average wages should rise faster than the price of final output (otherwise all real income growth is captured by capital, which almost no economic theory predicts, nor does it show up in the data). By implication, there should be rising real consumption by households (the magnitude of the rise in the standard of living is a point of debate).
  2. Because of the way seniority is treated in the marketplace, an individual's wages are expected to rise faster than the average. (If we compared "seniority cohorts" over time, each cohort should have wage growth in line with the average.)
  3. Income taxes feature rising marginal rates, and tax brackets are at best indexed to prices rather than wages.

The net result of these factors is that an individual's income tax payments should rise faster than the cost of goods and services. Since the Chapwood Index providers offer no information on the details of their methodology, we have no way of knowing how they treated this effect.
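The interaction of these factors can be sketched with a stylized example. The tax schedule below is entirely hypothetical (an invented two-bracket tax with fixed thresholds), chosen only to show that tax paid can outpace both wages and prices:

```python
# Hypothetical two-bracket income tax with unindexed thresholds.
# All numbers are invented for illustration.

def tax(income):
    """20% on the first 50,000; 40% on income above that. Thresholds fixed."""
    low = min(income, 50_000.0)
    high = max(income - 50_000.0, 0.0)
    return 0.20 * low + 0.40 * high

w0, growth, infl, years = 60_000.0, 0.04, 0.02, 10
w10 = w0 * (1 + growth) ** years      # nominal wages outpace prices

wage_rise = w10 / w0 - 1              # ~48%
tax_rise = tax(w10) / tax(w0) - 1     # ~82%: bracket creep amplifies the rise
price_rise = (1 + infl) ** years - 1  # ~22%

print(f"wages +{wage_rise:.0%}, tax +{tax_rise:.0%}, prices +{price_rise:.0%}")
```

Even with no change in tax law, tax paid grows much faster than the price of goods and services, so treating taxes as just another "price" mechanically pushes up a cost-of-living index.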
On a personal basis, the "cost of living" is very much dependent upon life decisions. Income differentials are growing, and goods and services aimed at the wealthy have typically seen price rises that are faster than those consumed mainly by the poor (although there are exceptions, such as various medicines in the United States). If you pursue a lifestyle that is filled with markers of class position, your personal cost of living is likely to have risen faster than average. Based on some Chapwood Index components -- private school, cat grooming (spoiler: cats groom themselves), luxury box rental -- such items are probably overweighted, given the price-weighting scheme allegedly used.

Questionable Definitions

Another way the Chapwood Index could be constructed to give a high inflation rate is to take advantage of the vagueness of definitions. They specify "laptop computer," which provides almost no information. I looked at the website of a large American electronics store, and they list laptops ranging in price from $169 to $4199. Unless there are very specific criteria to ensure comparability from year to year (in an environment where laptop models are continuously changing), this leaves a spectacular amount of wiggle room in picking which price to use. (E.g., start with a low end laptop, then slide up the quality scale in later years.)
(This also shows up in "Play Station" [sic], which skates over the fact that normally only the latest versions of PlayStation™ consoles are for sale.)
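The size of that wiggle room is easy to quantify. The prices below are hypothetical, and there is no claim that Chapwood actually does this; the point is only that a vague item definition permits it:

```python
# Hypothetical laptop prices: suppose the overall laptop market is flat,
# but the index compiler starts at the low end and drifts up the quality
# scale each year. Invented numbers; no claim about Chapwood's actual choices.

chosen = [400, 500, 650, 850]        # price of the model picked each year
yoy = [b / a - 1 for a, b in zip(chosen, chosen[1:])]
print([f"{r:.0%}" for r in yoy])     # 25%, 30%, 31% "inflation" with flat true prices
```

Quality drift alone can manufacture double-digit measured inflation for an item whose market price never moved.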
We have to take on faith that the construction of the Chapwood Index handles changes to index components in a fashion that does not lead to higher inflation rates.

Economic Implausibility

Now that I have covered how someone could end up with a cost of living index that rises faster than CPI, we turn to the economic implications. The results are implausible, for the exact same reason that the Shadowstats inflation numbers were implausible. A large number of critiques of Shadowstats already exist that outline this argument. (The critiques of Shadowstats methodology details would not be applicable, of course.)
We need to look at how we define the price level from a macro perspective. We say that nominal GDP growth is (roughly) equal to real GDP growth plus the growth in the GDP deflator. The GDP deflator is an economy-wide price level, which can diverge from the price level of final consumption goods and services. However, we would not expect divergences between the economy-wide price level and consumer prices to be sustained indefinitely.

GDP And The Effect Of Inflation Hypothesis

The above figure illustrates why this decomposition matters. The top panel shows that reported nominal GDP growth has typically been in the neighbourhood of 5% during expansions. The growth of the GDP deflator (not shown) results in real GDP growth rates that bounce around 2-3% (also not shown).
What happens if the government were misrepresenting inflation? If we replace the reported GDP deflator with an index that grows at 2% a quarter (implying just over 8% a year inflation, which is probably below what the Chapwood Index implies, although it does not go back to 1995), we see a massive divergence in the level of real GDP. As basic mathematics suggests, it is shrinking rapidly, falling by more than half since 1995.
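This thought experiment can be reproduced with round numbers. The figures below are stylized (5% nominal growth is the typical expansion-era rate mentioned above; the alternative deflator is the 2%-per-quarter assumption), not actual national accounts data:

```python
# Stylized version of the thought experiment: invented round numbers,
# not actual national accounts data.
years = 25                    # roughly 1995-2020
nominal_growth = 0.05         # typical expansion-era nominal GDP growth
alt_deflator = 1.02**4 - 1    # 2% per quarter => ~8.24% per year

real_level = 1.0
for _ in range(years):
    # Real growth is (approximately) nominal growth deflated by the price index.
    real_level *= (1 + nominal_growth) / (1 + alt_deflator)

print(f"implied real GDP after {years} years: {real_level:.2f}x")  # ~0.47x
```

Under the Chapwood-style deflator, real GDP falls to well under half its 1995 level, which is wildly at odds with any observed measure of real activity.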
Obviously, for inflation to be "really" 8%, either one or both of nominal and real GDP numbers have to be also manipulated. (If one looks at national accounts data, that implies that there has to be a lot of series being manipulated, since they all add up to GDP.)
Cross-Validation

We do not have to take these numbers purely on faith. As noted earlier, the government cannot just manipulate one CPI series -- it has to manipulate everything, since governments offer comprehensive national accounts data. We can validate the internal consistency of the data using a number of techniques.

  1. International comparisons. In addition to trade data, multinational corporations imply economic linkages across countries. A country that doctored its data would end up with data that is out-of-sync with international peers.
  2. Nominal GDP is equal to nominal gross domestic income (GDI). If we assume that nominal GDP is growing faster than reported, then nominal incomes have to be growing faster than reported as well. Average hourly earnings are reported, and one can compare that to local experience to see how plausible the numbers are. Meanwhile, earnings of corporations are relentlessly scoured by equity analysts that have nothing better to do with their lives. National accounting of profits follow different conventions than financial accounting, but analysts would pick up on corporations in aggregate massively outperforming nationally reported data.
  3. Real GDP is a vague concept that has theoretical issues if you think too much about it. However, it is expected to be highly correlated with real activity variables. Falling real production implies that workers are becoming far less productive, unless one starts inventing alternate employment data that suggests that unemployment is continuously rising beyond what is being reported. Many important real activity variables -- automobile production, oil production, railroad car loadings, even cardboard box production -- are reported by private sector bodies. If the real economy were shrinking, so would those reported figures. (I do not pay for such data sources, but based on my experience from when I did, all of them were coherent with officially reported GDP data.) By implication, a large number of private sector bodies have to be in on the conspiracy to cook internally consistent data.
  4. Many prices (e.g., commodity prices) are either available from market data, or are compiled by trade bodies. These can be compared to official price indices. Since CPI inflation is a traded financial product, there is a lot of interest in doing such comparisons. (For example, private data is narrower, and produced in a more timely fashion.)

All of these data are being scoured by analysts looking for an edge in macroeconomic prediction, and most of these analysts are happy to tell you how smart they are, and how much they love free market economics. Other than the handful of believers in Shadowstats/the Chapwood Index, all of these analysts are allegedly being fooled by all of the available data sets.

Raw, Stinking, Misinformation

The Chapwood website asserts:

The government’s baseline CPI measure excludes items such as taxes, energy, and food. [emphasis mine] It is clearly manipulated and biased, therefore it is rarely accurate.

This is a line that is popular on fringe hard money websites, but it is incorrect. The official CPI used for cost-of-living adjustments is the headline CPI, which includes food and energy. Economists strip out food and energy to calculate "core inflation," which is used only for analysis, not for indexation.

Including such a statement is a glaring red flag. 

Concluding Remarks

I will summarise my arguments into two key takeaways.

  1. It is entirely possible to define a "cost of living" index that rises faster than reported CPI (especially if one compares a regional price versus a national average). An individual's cost of living is based on their lifestyle choices; if you wish, you can have a very expensive lifestyle.
  2. The Chapwood Index numbers are too far away from published data to be chalked up to technical index construction issues. If we believed that the price level of the economy moved in line with the Chapwood Index, it would imply that government statisticians across the developed world are creating an elaborate fantasy world with thousands of internally consistent economic time series, and the private sector is in on the act, producing data that are coherent with the government's story. 

(c) Brian Romanchuk 2020

Bond investors see through central bank lies and expose the fallacies of mainstream macroeconomics

Published by Anonymous (not verified) on Wed, 06/01/2021 - 4:53pm in

It’s Wednesday and I usually try to write less blog material. But given the holiday on Monday and a couple of interesting developments, I thought I would write a bit more today. And after that, you still get some great piano playing to make wading through central bank discussions worthwhile. The Financial Times article (January 4, 2021) – Investors believe BoE’s QE programme is designed to finance UK deficit – is interesting because it provides one more piece of evidence that exposes the claims of mainstream macroeconomists operating in the dominant New Keynesian tradition. The facts that emerge are that the major bond market players do not believe the Bank of England statements about its bond-buying program, which have tried to deny the reality that the central bank is essentially buying up all the debt issued by the Treasury as it expands its fiscal deficits. This disbelief undermines many key propositions that students get rammed down their throats in macroeconomics courses. It also provides further credence to the approach taken by Modern Monetary Theory (MMT).

Central bankers caught out

For years now, the ECB Board members have been out there trying to deny the obvious.

Economists have joined in the charade because to admit the obvious would expose the whole scam.

While the Bank of Japan began their large bond-buying exercise in the early 2000s, it wasn’t until the GFC that other leading central banks joined the party.

In Europe, the ECB, faced with the insolvency of many Member States of the Economic and Monetary Union (EMU), moved outside the legal limits of its operations under the various Treaties, and, in May 2010, introduced the Securities Markets Program (SMP) whereby they began buying up unlimited volumes of national government debt in the secondary markets.

They clearly should have done that sooner – back in 2008 – and the European Commission should have immediately suspended the Stability and Growth Pact provisions.

At that time, if the ECB had announced that it would support all necessary fiscal deficits to offset the private spending collapse, things would have been somewhat different for the GFC experience of the 19 Member States.

The former decision could have been justified under the ‘exceptional and temporary’ circumstances provision of Article 126 of the TFEU relating to the Excessive Deficit Procedure.

It was clear that the situation was ‘exceptional’ and with appropriate policy action would have been ‘temporary’. The Council had already demonstrated a considerable propensity to bend its own rules.

However, the ECB’s intervention came too late to curb the damage and was accompanied by moronic conditionalities, which morphed into the oppression dished out by the Troika (ECB, EC and IMF) on Greece and other nations suffering massive recessions.

If the SMP had been introduced in 2008 rather than 2010 and without the conditional austerity attached, things would have been much different.

No Treaty change would have been required for either of these ad hoc arrangements to be put in place.

While obviously inconsistent with the dominant neoliberal Groupthink in Europe, these policy responses would have saved the Eurozone from the worst.

Fiscal deficits and public debt levels would have been much higher but, in return, there would have been minimal output and employment losses and private sector confidence would have returned fairly quickly.

The response of the private bond markets would have been irrelevant.

The fact is though, that the ECB was out there buying up the debt issued by the Member States and suppressing bond yields in the process.
That began in earnest on May 14, 2010 when the ECB established its Securities Markets Program (SMP) that allowed the ECB and the national central banks to, in the ECB’s own words, “conduct outright interventions in the euro area public and private debt securities markets” (Source).

This was central bank-speak for the practice of buying government bonds in the so-called secondary bond market in exchange for euros, that the ECB could create out of ‘thin air’.

Government bonds are issued to selective institutions (mostly banks) by tender in the primary bond market and then traded freely among speculators and others in the secondary bond market.

The action also meant that the ECB was able to control the yields on the debt: by pushing up the demand for the bonds, their market price rose, and since the coupon payments attached to the debt are fixed, the yield fell as the price increased.
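The inverse price/yield relation can be sketched with the simplest possible case, a perpetuity (a bond paying a fixed coupon forever); the numbers are invented for illustration:

```python
# Stylized illustration of the inverse price/yield relation, using a
# perpetuity (fixed coupon forever). Invented numbers.
coupon = 3.0  # fixed annual coupon per 100 of face value

def yield_at(price):
    """Current yield of a perpetual bond: fixed coupon / market price."""
    return coupon / price

print(f"{yield_at(100.0):.2%}")  # 3.00% when trading at par
print(f"{yield_at(120.0):.2%}")  # 2.50%: buying pushes the price up, the yield down
```

Central bank purchases raise the price, and because the coupon does not change, the yield mechanically falls.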

Competitive tenders then would ensure any further primary issues would be at the rate the ECB deemed appropriate (that is, low).

What followed was pure pantomime.

After the SMP was launched, a number of ECB’s official members gave speeches claiming that the program was necessary to maintain, in the words of Executive Board member, José Manuel González-Páramo during a speech delivered on October 21, 2011 – The ECB’s monetary policy during the crisis:

… a functioning monetary policy transmission mechanism by promoting the functioning of certain key government and private bond segments …

In other words, by placing the SMP in the realm of normal weekly central bank liquidity management operations, they were trying to dispel any notion that they were funding government deficits.

This was to quell criticisms, from the likes of the Bundesbank and others, that the program contravened Article 123 of the TEU.

In early 2011, the fiscally-conservative boss of the Bundesbank, Axel Weber, who was being touted to replace Trichet as head of the ECB, announced he was resigning, ostensibly in protest of the SMP and the bailouts offered to Greece and Portugal.

On October 10, 2010, Axel Weber, an ECB Governing Council member, told a gathering in New York that the SMP was “blurring the different responsibilities between fiscal and monetary policy.”

Another ECB Executive Board member, Jürgen Stark also resigned in protest over the SMP in November 2011. He clearly understood that the ECB was disregarding the no bailout clauses that were at the heart of its legal existence.

The head of the Bundesbank, Jens Weidmann also was critical.

He knew that the SMP was what he erroneously called “monetary financing”.

In a speech given to the SUERF/Deutsche Bundesbank Conference in Berlin on November 8, 2011 – Managing macroprudential and monetary policy – a challenge for central banks – he noted that:

One of the severest forms of monetary policy being roped in for fiscal purposes is monetary financing, in colloquial terms also known as the financing of public debt via the money printing press. In conjunction with central banks’ independence, the prohibition of monetary financing, which is set forth in Article 123 of the EU Treaty, is one of the most important achievements in central banking. Specifically for Germany, it is also a key lesson from the experience of the hyperinflation after World War I. This prohibition takes account of the fact that governments may have a short-sighted incentive to use monetary policy to finance public debt, despite the substantial risk it entails. It undermines the incentives for sound public finances, creates appetite for ever more of that sweet poison and harms the credibility of the central bank in its quest for price stability. A combination of the subsequent expansion in money supply and raised inflation expectations will ultimately translate into higher inflation.

Whatever spin one wants to put on the SMP, it was unambiguously a fiscal bailout package.

Weidmann was correct in that sense.

The SMP amounted to the central bank ensuring that troubled governments could continue to function (albeit under the strain of austerity) rather than collapse into insolvency.

Whether it breached Article 123 is moot but largely irrelevant.

The SMP reality was that the ECB was bailing out governments by buying their debt and eliminating the risk of insolvency.

The SMP demonstrated that the ECB was caught in a bind.

It repeatedly claimed that it was not responsible for resolving the crisis but, at the same time, it realised that as the currency-issuer, it was the only EMU institution that had the capacity to provide resolution.

The SMP saved the Eurozone from breakup.

And, of course, this sort of behaviour by central banks is now the norm.

Which is why a recent article in the Financial Times (January 4, 2021) – Investors believe BoE’s QE programme is designed to finance UK deficit – is interesting.

Note that Chris Giles (one of the co-authors of the article and Economics Editor at the FT) has been tweeting about the big U-turn from the OECD (which I will write about soon) supporting fiscal dominance and rejecting almost all of the claims that that organisation pushed for years about the benefits of austerity.

It seems that Mr Giles, one of the leading austerity proponents in the financial media, is also undergoing somewhat of an epiphany, as are many who are desperately trying to get on the right side of history, as the paradigm in macroeconomics shifts towards a Modern Monetary Theory (MMT) understanding.

The FT article relates to the bond-purchasing program of the Bank of England.

In the same way that the ECB officials denied what they were doing, the Bank of England governor claims that their massive quantitative easing program is about ‘inflation control’ – pushing inflation up to its 2 per cent target rate.

Of course, even that sort of reasoning reflects the flaws of the mainstream paradigm.

Central banks have proven incapable of driving up inflation rates as they increase bank reserves because the underlying theory of inflation (Quantity Theory of Money with money multiplier) is inherently incorrect.

But the FT article surveyed “the 18 biggest players in the market for UK government bonds” to gauge their understanding of what the Bank of England was up to.

The results were:

1. “overwhelming majority believe that QE in its current incarnation works by buying enough bonds to mop up the amount the government issues and keep interest rates low”.

2. “most said they thought the scale of BoE bond buying in the current crisis had been calibrated to absorb the flood of extra bonds sold this year, suggesting they believe the central bank is financing the government’s borrowing.”

3. “investors place fiscal financing at the centre of their understanding of how QE works, a conviction that has strengthened in the current crisis compared with previous rounds of bond buying.”

The FT article provided this graph, which tells the story:

The bottom line is that the bond market players don’t believe a word the Bank of England bosses are saying.

And this bears on the so-called arguments about central bank credibility, used by the mainstream to suppress fiscal policy and central bank bond buying programs.

Apparently, if the bond markets think the way the central bank is behaving is not ‘credible’ then rising yields and inflation will follow quickly as the markets ignore central bank policy settings.

One has to laugh.

Here we clearly have the major players in the bond markets outright disbelieving the central bank statements and clearly understanding exactly what the bank is up to, yet yields stay around zero and there is no inflation remotely in sight.

The FT revealed that the survey respondents also did not believe the line pushed by New Keynesians (mainstream macroeconomists) that the Bank bond purchases would drive inflation up.

One respondent said:

While the idea of raising inflation expectations, to help reach the bank’s target of actual inflation, is very popular, it is arguably less clear to what extent central banks can really influence inflation expectations and actual inflation.

The game is well and truly up!

Music – Hymn to Freedom

This is what I have been listening to while working this morning.

It is a piano piece that I like to sometimes play on my own as a sort of practice routine.

The song – Hymn to Freedom – first appeared on the 1963 album – Night Train (Verve) – released by the incomparable Canadian pianist – Oscar Peterson – with the other players in his trio comprising:

1. Ray Brown (Acoustic double bass).

2. Ed Thigpen (Drums).

The whole album is one of the best out there.

Oscar Peterson wrote the song in support of the civil rights movement in the US in the early 1960s.

On the Freedom reference, which isn’t why I was listening to the album today, I saw that Kamala Harris is once again in the spotlight for a story about her childhood where she appears to rip-off an anecdote from Martin Luther King. It doesn’t augur well for the next administration in the US. See this story if you are interested – Kamala Harris’ ‘Fweedom’ story mirrors MLK account (January 4, 2021).

That is enough for today!

(c) Copyright 2021 William Mitchell. All Rights Reserved.

Transfers And Overheating

Published by Anonymous (not verified) on Tue, 05/01/2021 - 3:12am in

Tags 

Inflation

 Core Inflation, U.S. and Canada
A recent controversy erupted over whether transfer payments (such as the $2000 transfer that was debated in the United States) would cause the economy to overheat. There is an interesting issue here: even if $2000 is not enough, what about larger amounts? I have severe doubts that anyone could give a useful answer to what appears to be a simple quantitative question, short of a country pushing the envelope for the sake of an economic experiment.
Larry Summers raised this question in a recent op-ed that was dunked on by pretty much my entire Twitter timeline. When I read his piece, I had some sympathy with the argument that preventing a collapse in unemployment payments would be a higher priority than a fresh transfer payment. That said, there is nothing stopping the pursuit of both policies. However, I do not have enough information to say what amount would be sensible to pay, so I am not in a position to pass judgement on Summers' position versus his critics'.
Nevertheless, it is interesting that mainstream economists cannot easily answer the question of what level of transfer payments would cause overheating. The ability to answer such quantitative questions is exactly why highly mathematical economic theory was pursued, with more qualitative approaches pushed into the dustbin by editors at mainline journals. To be clear, I am not advancing an inflation model that can answer that question, so my concern is not that they cannot give an answer. Rather, the issue is that an inability to provide a quantitative answer is exactly what heterodox critics had been saying for a long time, only to be met by the response "where is your model?"

Getting an Answer?

I would argue that there are two main approaches to getting a quantitative answer to this question.

  1. Use a model fitted against historical data, and extrapolate the policy change.
  2. Back of the envelope modelling based on a simplified theoretical model.

The issue with the first approach -- fitted models -- is that the fitting has happened in a completely different institutional environment. One could try, but my guess is that it would be a garbage-in, garbage-out exercise. (The justification for that assertion follows from my back-of-the-envelope discussion, which is the remainder of the article.)
The issue with back-of-the-envelope approaches is that each simple theoretical model will give different results. Since it is hard to calibrate against historical data, there is no way to test the candidates in advance. We need to use historical episodes -- and we are getting some data points based on the events of 2020.

The Problem of Inflation

Most empirical analysis of fiscal policy revolves around the issue of multipliers. Different models suggest different multipliers, based on assumed theoretical characteristics of the economy. (In fact, this is what I would suggest.)
Unfortunately, there are two issues with this approach.

  1. The multipliers under current circumstances are likely to be different, given the nature of the current environment.
  2. Even if we get the multiplier correct, that might tell us about nominal GDP over the next few quarters. The relationship between nominal GDP and inflation is not set in stone.

Multipliers are Different

The observation that multipliers would be different in the current environment is well known. With activities closed, even if people want to spend on certain goods or services, they cannot. Meanwhile, businesses that are closed by health edicts are not going to hire, no matter the situation of aggregate demand.
Although medical news has deteriorated markedly in the past month in many areas (including my home province), one has sympathies with a story of a rapid demand increase once vaccine rollouts hit (along with the warmer weather in the Northern Hemisphere, which coincided with school closures and lower infection rates). So, "the" multiplier might change rapidly.

Link Between Nominal GDP and Inflation Vague

Simple models that rely on a single good (or aggregated continuum of goods) naturally lead to a tight link between production levels and production constraints. In the real world, that linkage breaks down. For example, digital services often have a negligible cost of increased production. (At most, new servers might be needed.)
This leads to the divergence between post-Keynesian and neoclassical inflation theories. Neoclassical models rely on supply and demand curves, along with some arbitrary price stickiness. Post-Keynesians emphasise that most prices are administered.
Both approaches can come up with an explanation as to why the previous rounds of stimulus have had no inflationary impact (as demonstrated by the chart of U.S. and Canadian core inflation at the top of this article) -- despite the household support programmes involving REALLY BIG DOLLAR AMOUNTS.

  1. Neoclassicals can argue that inflation expectations are anchored because of central bank credibility (or whatever), and so inflation is stable.
  2. Anyone who argues that prices are administered would realise that it makes no sense to raise prices and wages in the middle of the current economic uncertainty. Only sectors with strong supply chain disruptions -- notably, construction -- might entertain it. However, although prices have risen, the construction industry is also using time rationing (delaying projects).

Given that these stories are fairly similar, I see no obvious way to distinguish them. 
Returning to the original question, we need to ask: why would another round of stimulus be any different? So long as the payment is roughly comparable to average households' monthly mortgage or rent payment, it is likely to be absorbed there. Since it is hard to see an immediate hiring binge this far away from vaccine rollouts, second round effects are likely to be muted.
My view is that we could only see more significant effects if the payments are converted into permanent flows. At that point, those flows need to be compared to regular employment income to see how significant they are.

Concluding Remarks

One can come up with reasons not to worry about the inflationary impact of one-time payments, particularly when it is hard to spend on many items. There are obviously limits, but the limits may be far higher than what squeamish politicians and policymakers can stomach.
The story is different for permanent flows. It seems reasonable to believe that the U.S. Federal Government could pay all adults $100/month and the inflationary impact would be close to diddly squat (to use the technical econometrics term). This is less clear once we hit the $1000/month threshold, since that flow is a significant portion of many household incomes. My argument is that you would need an actual programme to calibrate numbers closer than that, but the key point is that the metric that matters is the comparison to typical income levels.

Appendix: Rant about Structural Forces

One of the more amusing parts of Larry Summers' op-ed is that he flipped from a secular stagnation story to one where inflation could take off like a rocket as a result of a one-time payment. Not exactly a secular force if it can be defeated by a single policy step.
This is the problem with any "secular" inflation story, such as the ubiquitous demographics arguments (that are just dual y-axis charts with average ages and inflation). There are many structural factors that coincided with the inflation round-trip from the 1970s-2020s in most developed economies. However, it is a hard sell that demographics (or whatever) can prevent inflation if fiscal policy ramps up.
(c) Brian Romanchuk 2021

Is the $US900 billion stimulus in the US likely to overheat the economy – Part 2?

Published by Anonymous (not verified) on Thu, 31/12/2020 - 5:36pm in

The answer to the question posed in the title is No! Lawrence Summers’ macroeconomic assessment does not stack up. In – Is the $US900 billion stimulus in the US likely to overheat the economy – Part 1? (December 30, 2020) – I developed the framework for considering whether it was sensible for the US government to provide a $US2,000 once-off, means-tested payment as part of its latest fiscal stimulus. Summers was opposed to it claiming that it would push the economy into an inflationary spiral because it would more than close the current output gap. Today, I do the numbers. The conclusion is that there is more than enough scope for the Government to make the transfers without running out of fiscal space.

Measuring the output gap in the US

The first graph gives you an idea of the real GDP (output) history and the CBO measure of potential GDP. The grey bars are the NBER recessions (peak to trough).

The boxed area is shown in more detail in a later graph.

The thing that stands out in this graph is that the GFC was a very bad recession relative to the previous downturns, of which some were quite serious – such as the 1981-82 recession.

Not only was the length of downturn of the GFC prolonged but the amplitude of the real GDP contraction stands out.

But the other major issue is that CBO estimated that the potential growth path contracted significantly as a result of the prolonged recession.

One reason potential GDP declines after a recession is that the investment ratio falls significantly.

Investment spending has two impacts:

(a) It adds to current demand for goods and services (capital equipment, etc).

(b) It adds productive capacity to the economy which increases the potential GDP.

So an enduring decline in investment, which commonly drives recessions, not only opens up the output gap, but also reduces potential output.

The impact on potential GDP depends on how deep and how long the recession is.

The GFC was particularly severe.

The decline in the investment ratio as a result of the crisis was substantial and endured for 2 years. It went from 18.2 per cent in the June-quarter 2006 to a low of 12.3 per cent in the December-quarter 2009.

As a result, the potential productive capacity of the US contracted somewhat. The question is: how much?

There are various estimates available but the overall message is that potential GDP fell considerably as a result of the lack of productive investment in the period following the crisis (see below).

It returned slowly (the cyclical response was asymmetric) to 18 per cent by the June-quarter 2018, fell back somewhat and then peaked at 18.4 per cent in the June-quarter 2019.

The current decline due to the pandemic only accentuates the downturn spiral that began in the September-quarter 2019.

The next graph shows the percentage annual change in business investment in the US since 1950.

Now consider the next graph which zooms in on the rectangular area identified in the earlier graph.

To get some idea of what has happened to potential real GDP growth in the US, the next graph shows the actual real GDP for the US (in $US billions) and two estimates of the potential GDP. There are many ways of estimating potential GDP given it is unobservable.

In this blog post – Common elements linking US and UK economic slowdowns (May 1, 2017) – I discussed estimates of potential GDP in the US and the shortcomings of traditional methods used by institutions such as the Congressional Budget Office.

So if you are interested please go back and review that discussion.

The latest CBO estimates, made available through – St Louis Federal Reserve Bank, show why we should be skeptical.

While I could have adopted a much more sophisticated technique to produce the red dotted series (potential GDP) in the graph, I decided to do some simple extrapolation instead to provide one base case.

The question is when to start the projection and at what rate. I chose to extrapolate from the most recent pre-GFC GDP peak (December-quarter 2007). This is a fairly standard sort of exercise.

The projected rate of growth was the average quarterly growth rate between 2001Q4 and 2007Q4, which was a period (as you can see in the graph) where real GDP grew steadily (at 0.65 per cent per quarter) with no major shocks.

If the global financial crisis had not occurred, it would be reasonable to assume that the economy would have grown along the red dotted line (or thereabouts) for some period.

The gap between actual and potential GDP (red dotted version) in the September-quarter 2020 is around $US3,377.3 billion or around 15.4 per cent.
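The extrapolation just described is simple enough to sketch in a few lines. The 0.65 per cent quarterly growth rate comes from the text; the 2007Q4 starting level and the 2020Q3 actual level are approximate chained-dollar real GDP figures I have filled in for illustration, so the resulting gap only roughly matches the $US3,377.3 billion / 15.4 per cent quoted here.

```python
# Crude potential-GDP extrapolation from the pre-GFC peak (illustrative only).
# Starting and actual GDP levels are my own approximate fill-ins, in $US billion.
GDP_2007Q4 = 15_762.0     # approx. real GDP at the December-quarter 2007 peak
GDP_2020Q3 = 18_584.0     # approx. actual real GDP, September-quarter 2020
QUARTERLY_GROWTH = 0.0065 # 0.65 per cent per quarter (from the text)
QUARTERS = 51             # 2008Q1 through 2020Q3

# Compound the peak forward at the pre-GFC trend rate.
potential = GDP_2007Q4 * (1 + QUARTERLY_GROWTH) ** QUARTERS
gap = (potential - GDP_2020Q3) / potential
print(f"Extrapolated potential: ${potential:,.0f}bn, output gap: {gap:.1%}")
```

With these fill-in figures the gap comes out near the 15 per cent mark, illustrating the mechanics of the red dotted line rather than reproducing it exactly.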

The green dotted line is the estimate of potential output provided by the US Congressional Budget Office and suggests that the US economy is 3.5 per cent below potential GDP (as calculated by the CBO).

Note that prior to the pandemic, the CBO was estimating that the US economy was operating above its potential limits by 1.2 points (in the December-quarter 2019).

They considered the economy had been operating at over-full capacity since the March-quarter 2018.

It is hard to believe the estimate!

Why?

1. Inflation has shown no signs of accelerating.

2. Inflationary expectations declined over that period.

The following graphs are compiled using the Cleveland Federal Reserve Bank’s – Inflation Expectations data.

The first graph shows the expected price inflation for the next 12 months and for the next 10 years from 1985 to November 2020.

The second graph zooms in on the period one-quarter before the CBO estimated the US economy was ‘overheating’ (that is, producing more than its capacity).

Over that period the inflationary expectations have been trending down and well below 2 per cent, which is a benchmark the Federal Reserve uses to define price stability.

In other words, the market participants have no expectation that inflation is going to rise at all over the next year or over the ten-year period ahead.

Inflationary expectations are benign.

While it might take a few quarters for an over-capacity economy to ‘heat up’, it doesn’t make any sense for the market to systematically believe that inflation will continue to spiral downwards at the same time the economy is operating at more than 1 percentage point above its potential.

3. The US Federal Reserve Open Market Committee (FOMC) probably bought some of the CBO kool-aid and started hiking rates in December 2015. There were 8 subsequent increases before they worked out the economy was not overheating at all and they quickly cut the rate to 0.25 per cent (cutting 2 points).

They had already cut it three times before the pandemic hit.

4. While the unemployment fell to levels not seen since the 1960s, the broader measures of labour underutilisation indicated there was still considerable slack.

Even though the official unemployment rate has been relatively low, the question to ask is this: How much lower would the unemployment rate and the broader underutilisation rate go if the US federal government offered a Job Guarantee on an unconditional basis?

I would bet the answer would be much lower without any inflation acceleration emerging.

The flat wages growth supports that interpretation.

The only group that enjoyed significant wages growth has been at the top-end of the wage distribution (95th percentile and beyond).

The bottom percentiles have barely seen any growth and certainly not sufficient to think of the last few years as being an overheating economy.

Taken together, all the usual indicators suggest that the CBO output gap estimates are inaccurate – probably by several percentage points.

And that inaccuracy is a direct function of the way they define potential GDP and integrate the NAIRU into the estimation process.

We know (and I explain this in more detail in the blog post mentioned above), the CBO base their estimate of Potential GDP on their estimate of the NAIRU – the (unobservable) Non-accelerating Inflation Rate of Unemployment.

This is a conceptual unemployment rate that is consistent with a stable rate of inflation.

The literature demonstrates that the history of NAIRU estimation is far from precise. Studies have provided estimates of this so-called ‘full employment’ unemployment rate as high as 8 per cent or as low as 3 per cent all at the same time, given how imprecise the methodology is.

The former estimate would hardly be considered a ‘high rate of resource use’. Similarly, underemployment is not factored into these estimates.

The continued slack in the labour market (bias towards low-pay and high underemployment) would lead to the conclusion that the output gap is likely to be somewhat closer to the extrapolated estimate (red dotted line) than the CBO estimate.

However, while the red dotted line may have had some validity as a guide to potential output in the early part of the GFC, it is also clear that with the poor investment response during the GFC that the true potential GDP has fallen off that trend and lies somewhere between the CBO estimate and the crude extrapolation (red dotted line).

Think about the period between 2017 and 2018.

GDP growth steadied after its long recovery and a new trend looked like emerging before the pandemic.

If we extrapolate from that point, based on the average growth from the December-quarter 2015 to the December-quarter 2019 (0.57 per cent per quarter growth, which was below the pre-GFC trend of 0.65 per cent) out to the September-quarter 2020, we get a new line denoted by the red line in the next graph.

You can see that it lies above the CBO potential GDP estimates and closer to but well below the red dotted line (which is not shown here).

The estimated output gaps then – as at September-quarter 2020 – are:

1. Red dotted line – 15.4 per cent.

2. Red line – 5.1 per cent.

3. CBO official – 3.5 per cent.

4. Jobs gap method (see below) – 6.6 per cent.

I would suspect that the truth is somewhere between 1 and 2 but much closer to 4.

The US jobs deficit and the output gap

I updated the participation shifts due to ageing this morning to allow us to decompose the shift in participation into a cyclical component and an ageing-population component.

As the population ages, and older workers have lower participation rates, the aggregate participation rate, which is a weighted average of the individual age cohort participation rates, falls – not because the individual age cohort rates change but that there are more workers in the total with lower rates.

That is, some of the drop in US participation rates over the last 2 decades is due to a compositional effect rather than a cyclical effect (the latter captures workers dropping out of the labour force temporarily when they stop searching as a result of the lack of job opportunities).

My detailed analysis which I will write up in another blog post some time later shows that about 66 per cent of the decline in the participation rate since April 2000 is due to these compositional shifts and 33.8 per cent is due to the economic cycle (output below potential).

The current participation rate of 61.5 per cent is a long way below the most recent peak in January 2007 of 66.4 per cent.

Adjusting for the demographic effect would give an estimate of the participation rate in November 2020 of 65.3 per cent if there had been no cyclical effects.
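The compositional mechanism can be illustrated with a toy shift-share calculation. All cohorts and numbers below are hypothetical, chosen only to show the point made above: the aggregate participation rate can fall even when no cohort's own rate changes, simply because population weight shifts toward lower-participation cohorts.

```python
# Hypothetical shift-share illustration of the ageing effect on participation.
# Cohort participation rates are held fixed; only population weights change.
rates = {"25-54": 0.82, "55+": 0.40}         # cohort participation rates (fixed)
weights_2000 = {"25-54": 0.75, "55+": 0.25}  # population shares, earlier date
weights_2020 = {"25-54": 0.60, "55+": 0.40}  # older population, later date

def aggregate(rates, weights):
    """Aggregate participation rate as a weighted average of cohort rates."""
    return sum(rates[g] * weights[g] for g in rates)

early = aggregate(rates, weights_2000)  # weighted average with 2000 weights
late = aggregate(rates, weights_2020)   # weighted average with 2020 weights
print(f"Aggregate participation falls {early - late:.1%} purely from ageing")
```

A fuller decomposition would re-weight the actual age-cohort rates from the Current Population Survey, but the toy version captures why part of the observed decline is compositional rather than cyclical.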

To compute the jobs gap, a ‘full employment’ benchmark of 3.5 per cent is used – which was the low-point rate achieved in December 2019 before the pandemic.

I explain in this blog post – US labour market – strengthened in February but still not at full employment (April 13, 2018).

Using the estimated potential labour force (controlling for declining participation), we can compute a ‘necessary’ employment series, which is defined as the level of employment that would ensure only 3.5 per cent of the simulated labour force remained unemployed.

This time series tells us by how much employment has to increase in each month (in thousands) to match the underlying growth in the working age population to maintain the 3.5 per cent unemployment rate benchmark.

In the blog post cited above (US labour market – strengthened in February but still not at full employment), I provide more information and analysis on the method.

There are two separate effects:

  • The actual loss of jobs between the employment peak in December 2019 and November 2020 is 9,071 thousand jobs.
  • The shortfall of jobs (the overall jobs gap) is the actual employment relative to the jobs that would have been generated had the demand-side of the labour market kept pace with the underlying population growth and the participation rate adjusted for ageing. This shortfall loss amounts to 5,711 thousand jobs.
  • The total jobs gap is thus 14,782 thousand.

This gives another perspective on what the output gap might be.

We can estimate the extra output that would be forthcoming if these workers were engaged by multiplying the jobs gap by the average productivity per person employed.

The aggregate average productivity is likely to overstate the actual productivity gain from the workers who are currently without work, given they typically work disproportionately in lower-paying jobs, so I adjust the average productivity to be only 70 per cent of the economy-wide average.

Using that benchmark we get an output gap in the September-quarter of 6.6 per cent.
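The conversion from a jobs gap to an output gap works roughly as follows. The 14,782 thousand jobs gap and the 70 per cent productivity discount are from the text; the November 2020 employment level is an approximate figure I have supplied, so the result only roughly reproduces the 6.6 per cent estimate.

```python
# Sketch of the jobs-gap-to-output-gap conversion described above.
JOBS_GAP = 14_782_000         # total jobs gap (from the text)
EMPLOYMENT = 142_600_000      # approx. actual employment, Nov 2020 (my fill-in)
PRODUCTIVITY_DISCOUNT = 0.70  # missing workers assumed 70% as productive

# Output is proportional to productivity-weighted employment, so the
# economy-wide average productivity cancels out of the gap ratio.
effective_extra_workers = JOBS_GAP * PRODUCTIVITY_DISCOUNT
potential_equivalent = EMPLOYMENT + effective_extra_workers
output_gap = effective_extra_workers / potential_equivalent
print(f"Output gap: {output_gap:.1%}")
```

Because average productivity cancels out of the ratio, the estimate depends only on the jobs gap, the employment level, and the productivity discount applied to the jobless.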

The $US900 billion package

The stimulus package that the President has signed is included in the – Consolidated Appropriations Act 2021 (all 5,600 pages of it).

Distilling the essential features leads to this summary (which may not be perfect):

1. $US286 billion in direct aid comprising $US600 cheques means-tested (those earning below $US75,000 annually) and weekly unemployment assistance of $US300 per week for 11 weeks (to March 14, 2021).

The $US600 cheque outlays are to be capped at a total of $US166 billion.

2. $US325 billion for small business assistance, including $US284 billion in forgivable loans.

3. $US82 billion for assistance to help schools.

4. $US54 billion for public-health measures associated with contact tracing and vaccination.

5. $US45 billion for transportation – assisting airline payroll support and public transport.

6. $US25 billion in rental assistance.

7. $US13 billion for food-assistance (SNAP).

8. $US10 billion to help pre-school assistance, child-care.

9. $US15 billion to aid the arts sector (theatres, cinemas, music venues, etc).

10. $US10 billion to US Postal service.

11. $US7 billion to expand high-speed internet access to low-income families.

12. $US35 billion for development of wind, solar and other clean energy projects.

So the stimulus package is a mixture of individual transfers, government consumption expenditure, loans to businesses and transfers to state and local governments.

The composition is important because it has implications for the multiplier effects (see below).

With more than a third being in the form of loans to business, which may or may not be re-cycled into the spending stream, the direct injection from the package will be considerably lower than $US900 billion.

Further, as I pointed out in this blog post – Tax cuts are unlikely to work at present and are less effective than government spending increases (October 1, 2020) – the evidence from the cash handouts under the CARES Act (the first stimulus package) indicated that:

1. “Only 15 percent of recipients of this transfer say that they spent (or planned to spend) most of their transfer payment, with the large majority of respondents saying instead that they either mostly saved it (33 percent) or used it to pay down debt (52 percent).”

2. “U.S. households report having spent approximately 40 percent of their checks on average, with about 30 percent of the average check being saved and the remaining 30 percent being used to pay down debt.”

3. “Little of the spending went to hard-hit industries selling large durable goods (cars, appliances, etc.). Instead, most of the spending went to food, beauty, and other non-durable consumer products that had already seen large spikes in spending even before the stimulus package was passed because of hoarding.”

I provided more analysis in the blog post cited.

So it is questionable how much of the direct assistance via individual transfers will actually be spent, given the fact that American households are carrying excessive debt levels.

The fiscal multiplier

The next step is to think about the spending multiplier, which measures the impact on final output and income of a unit change in an injection of spending.

So, if the multiplier is 1.5, then a $1 injected into the spending stream, say, by government fiscal stimulus, will lead to a final increase in GDP of $1.50.

The value depends on the proportion of each extra income received that is spent on consumption, the tax rate structure and the extra dollars that leak out to import spending.

An extra dollar in the hands of a low-income worker is likely to be mostly spent, whereas the same is not true for an extra dollar given to a high-income earner.

Further, during crises, when borrowing capacities fall and assets cannot be easily sold, the proportion spent increases.

Please read my blog post – Spending multipliers (December 28, 2009) – for more discussion on this point.
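A minimal textbook version of that multiplier logic can be written down directly. The formula is the standard expenditure multiplier with consumption, tax and import leakages; the parameter values below are purely illustrative, not estimates for the US.

```python
# Textbook expenditure multiplier: k = 1 / (1 - c(1 - t) + m), where
#   c = marginal propensity to consume out of disposable income,
#   t = tax rate, m = marginal propensity to import (leakage).
def multiplier(c, t, m):
    """Final change in GDP per dollar of spending injection."""
    return 1.0 / (1.0 - c * (1.0 - t) + m)

# Illustrative parameters only -- not estimates.
k = multiplier(c=0.65, t=0.25, m=0.15)
print(f"k = {k:.2f}: $1 injected leads to about ${k:.2f} of final GDP")
```

With these illustrative leakages the multiplier lands near 1.5, the kind of value quoted for government consumption in the discussion that follows; higher consumption propensities (low-income recipients) push it up, while saving and debt repayment push it down.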

This recent ‘FRBSF Economic Letter’ published by the Federal Reserve Bank of San Francisco – The COVID-19 Fiscal Multiplier: Lessons from the Great Recession (May 26, 2020) – provides some interesting discussion of the likely multiplier effects of the COVID stimulus packages.

They note that the composition of the stimulus package influences the value of the multiplier:

1. Individual transfers – trigger high on-spending.

2. Government consumption – “the multiplier may be as high as 1.5 to 2.0”.

3. Transfers to state and local governments – “unlikely that states would use any of their federal transfer funds to finance tax cuts or pay down preexisting debt.”

Overall, they conclude that the multiplier is likely to be “near or above 1” which means that a fiscal stimulus will be expansionary.

They conclude that:

Overall, the evidence suggests that the output boost from the current fiscal response is likely to be large.

So is the $US900 billion too expansionary?

We now have some concept of how far the US economy is from its potential.

I might also project that the output gap will widen in the next quarter or so given the horrific state of the pandemic in the US.

A $US900 billion spending increase would represent about 1.2 per cent of annual GDP.

There are two other uncertainties:

1. We do not know the time profile of when the spending will enter the spending stream.

2. We do not know how much of the $900 billion will enter the spending stream given the fact that a significant proportion of the CARES stimulus was saved or used to pay down debt and that more than a third of this stimulus package is in the form of loans.

What does a percentage output gap translate into in billions?

1. 5.1 per cent is $US3,950 billion over a year.

2. 6.6 per cent is $US5,267 billion over a year.

3. CBO 3.5 per cent is $US2,684 billion over a year.

So even if all the $US900 billion entered the spending stream over 2021 and the CBO’s estimates of the output gap were accurate, there would be no likelihood that the stimulus (multiplied up) would drive the economy into a state of overheating (that is, exhausting its productive potential).

Clearly, given that the full $US900 billion is not going to enter the spending stream over 2021 and the output gap is likely to be closer to 6.6 per cent than the CBO’s estimate, then the fiscal space is clearly able to accommodate the $US600 payment and would also be able to accommodate the revised proposal for $US2,000 per eligible person.
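The arithmetic behind that conclusion can be checked in a few lines. The output-gap dollar figures are from the text; the multiplier and the effective-injection fraction are my own assumptions (the latter reflecting the point that over a third of the package is loans and part of the transfers will be saved or used to pay down debt).

```python
# Back-of-the-envelope overheating check (all figures in $US billion).
PACKAGE = 900.0
MULTIPLIER = 1.5           # assumed, around the SF Fed's government-spending range
INJECTION_FRACTION = 0.6   # assumed share actually entering the spending stream

# Output-gap estimates from the text, as at September-quarter 2020.
gaps = {"CBO (3.5%)": 2_684, "Red line (5.1%)": 3_950, "Jobs gap (6.6%)": 5_267}

impact = PACKAGE * INJECTION_FRACTION * MULTIPLIER
for label, gap in gaps.items():
    verdict = "overheats" if impact > gap else "does not overheat"
    print(f"{label}: impact ${impact:,.0f}bn vs gap ${gap:,}bn -> {verdict}")
```

Even setting the injection fraction to 1 (the whole package spent within the year), the multiplied impact stays well inside the smallest of the three gap estimates.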

Distributional matters

This means there would also be scope, within the current fiscal space, to address the distributional anomalies that concern progressives (see my discussion in Part 1) without any offsetting fiscal measures to reduce the net spending injection.

That is the topic of another blog post though.

Conclusion

Larry Summers was not correct in his macroeconomic analysis and that is where the attacks should have begun.

I hope I have given a feel for how analysis of this type is an art form rather than an exact science.

There are assumptions, uncertainties, and complete unknowns that enter into the exercise, which ultimately has to be distilled down to a numbers game.

Some heterodox economists argue that because of the endemic uncertainty this sort of analysis is meaningless.

They overlook the fact that governments have to outlay dollars to motivate changes in the aggregates and so it is better to provide some numerical scope for what those outlays will deliver.

A good analysis also has an eye to sensitivity of settings, which I have demonstrated by considering the reasonable range of output gaps, for example.

Then you have to consider the consequences of error.

In this case, the consequences of not providing enough fiscal stimulus are much more significant than providing too much.

In the former case, a shortfall of spending will leave unemployment higher than otherwise necessary and sacrifice billions in foregone income.

The consequences of pushing spending a little more than is necessary is some price pressures, which are less destructive than unemployment and can be easily dealt with.

Happy New Year – as Victoria again closes its border to NSW after the latter has failed to deal with the latest virus numbers and spread the infection to Melbourne from Sydney.

More flights I had planned for tomorrow have just been cancelled.

2021 looking like 2020.

That is enough for today!

(c) Copyright 2020 William Mitchell. All Rights Reserved.

My Op Ed in the mainstream Japanese business media

Published by Anonymous (not verified) on Wed, 23/12/2020 - 12:54pm in

Tags 

Inflation, Japan

It’s Wednesday and my blog-light day. Today, I provide the English-text for an article that came out in the leading Japanese business daily, The Nikkei yesterday on Modern Monetary Theory (MMT) and its application to the pandemic. Relevant links are provided in the body of the post. The interesting point I think is that ‘The Nikkei’ is the “the world’s largest business daily in terms of circulation” and has clear centre-right leanings. The fact that they are interested in disseminating ideas that run counter to the mainstream narrative that the centre-right politicians have relied on indicates both a curiosity that is missing in the conservative media elsewhere, and the extent to which serious thinkers are becoming more open to MMT ideas. I have respect for media outlets that come to the source when they want to motivate a discussion on MMT rather than hire some hack to write a critique, which really gets no further than accusing MMT of being just about money printing.

Article in Nikkei

I was invited recently by the economics editor of – Nikkei, Inc – which own ‘The Nikkei’ newspaper, the prestigious Japan-based daily which is “the world’s largest business daily in terms of circulation”, to write an opinion piece about Modern Monetary Theory (MMT) and its application to the pandemic.

Nikkei, Inc also own the London-headquartered Financial Times and various TV networks in Japan.

The article – 雇用創出・教育へ投資果敢に コロナ危機と財政膨張 – appeared yesterday (December 22, 2020) in their – 経済教室 (Economic Class) columns where authors help readers understand new ideas.

The title literally translated means – Investing in job creation and education boldly – Corona crisis and fiscal expansion

Thanks very much for their invitation.

For Japanese readers, here is a snapshot (picture) of the article. The link above will take you to a readable format of the article (which is behind a paywall).

For non-Japanese readers, the English text that I sent the publisher follows. I had 1250 words. The heading that follows was my original title, although when one writes Op Ed articles, the title is also the choice of the editors of the media outlet.

Also, I can update those who are interested and have sent me inquiries – the Japanese government fellowship that I have been awarded for 2021 and which I was due to take up residency in Japan in February 2021 will be deferred until later in 2021 as a result of all the travel restrictions associated with the coronavirus.

I anticipate being in Japan for two months from October 2021 if all goes to plan. It will be a very exciting period to work with researchers in Kyoto and Tokyo and interact with the policy debate.

My Japanese is improving!

The paradigm shift in macroeconomics – Modern Monetary Theory

The sequence of crises – 1991 recession, the Global Financial Crisis (GFC) and, now the pandemic – has exposed the deficiencies of mainstream macroeconomics and focused attention on Modern Monetary Theory (MMT), as the rival paradigm.

We have entered a new era of fiscal dominance as policy makers discard their reliance on monetary policy to stabilise economies.

Even the IMF acknowledge that “Central banks … have facilitated the fiscal response by … financing large portions of their country’s debt buildup”, which has “helped keep interest rates at historic lows”.

This policy shift is diametric to what mainstream macroeconomists have been advocating for decades as they repeatedly warned that high deficits and public debt levels and large-scale central bank bond purchases would lead to disaster.

However, their predictions have been dramatically wrong and provide no meaningful guidance to available fiscal space nor the consequences of these policy extremes for interest rates and inflation.

Japan’s experience is illustrative.

It embraced the neoliberal private credit excesses in the 1980s, which caused the 1991 property collapse.

The government’s response pushed economic policies to the extreme of conventional limits – continuously high deficits, high public debt, with the Bank of Japan buying much of it.

Mainstream economists predicted rising interest rates and bond yields, accelerating inflation and, inevitably, government insolvency.

All predictions failed.

Japan has maintained low unemployment, low inflation, zero interest rates and strong demand for government debt (see graphic).

I provided this graph to The Nikkei which summarises the macroeconomic fiscal and monetary data.

Similar predictions were made during the GFC, when many governments followed the Japanese example.

They again failed because the underlying economic theory is wrong.

Austerity-obsessed governments, applying that flawed theory, forced their nations to endure slower output and productivity growth, elevated and persistent unemployment and underemployment, flat wages growth, and rising inequality.

MMT has consistently advocated a return to fiscal dominance and disabuses us of the claims that fiscal deficits are to be avoided.

MMT defines fiscal space in functional terms, in relation to the available real productive resources, rather than focusing on irrelevant questions of government solvency.

Most of what has been written in the media about MMT is misleading and seems content to dismiss it as ‘money printing’, which to the critics leads to inflation.

However, rather than being some sort of policy regime, MMT is, in fact, a lens which provides a superior understanding of our fiat monetary systems, particularly the capacities of currency-issuing governments.

To operationalise an MMT understanding as policy one has to overlay a set of values. Most policy choices that are couched in terms of ‘financial constraints’ are, in fact, just political or ideological choices.

The fiat money era began when US dollar gold convertibility ended in 1971.

This opened fiscal space for currency-issuing governments because the Bretton Woods requirements to offset spending with taxation and/or borrowing were no longer binding under floating currencies.

There is thus no financial constraint on government spending. Unlike households who use the currency and are financially constrained, a currency-issuing government cannot run out of money.

It can buy any goods and services that are available for sale in its currency including all idle labour.

Mass unemployment becomes a political choice.

MMT allows us to traverse from obsessing about financial constraints and all the negative narratives about the need to ‘fund’ government spending, to a focus on real resource constraints.

It focuses on how policy advances desired functional outcomes, rather than the size of the deficit.

To maximise efficiency, government should spend up to full employment.

The fiscal outcome will then be largely determined by non-government saving decisions (via automatic stabilisers).

The only meaningful constraint is the ‘inflationary ceiling’ that is reached when all productive resources are fully employed.

Mainstream economists will claim that they knew this all along because, in their words, governments can always ‘print money’, but, should not, because it is inflationary.

MMT demonstrates how this reasoning and terminology is erroneous.

First, all government spending is facilitated by central banks typing numbers into bank accounts.

There is no spending out of taxes or bond sales or ‘printing’ going on.

Elaborate accounting and institutional processes, which make it look as though tax revenue and/or debt sales fund spending, are voluntary arrangements that function to impose political discipline on governments.

Second, all spending carries an inflation risk.

If nominal spending growth outstrips productive capacity, then inflationary pressures emerge. Short of that point, government spending can bring idle resources back into use without generating inflation.

At full employment, a government wishing to increase its resource use has to reduce non-government usage.

By curtailing private purchasing power, taxation, while not required to fund spending, can reduce inflationary pressures.

Many commentators then argue that MMT economists are naïve because it is politically difficult to impose higher taxes (or spending cuts) to tackle inflation.

But governments regularly use discretionary fiscal cuts under the guise of fiscal consolidation. Japan’s sales tax increases exemplify this.

The harsh austerity that many nations introduced after the GFC is another example.

Further, many inflationary triggers do not require contractionary demand policies: for example, changes to administered prices (indexation arrangements) and regulatory intervention when market power is abused.

Governments should always anticipate sectoral bottlenecks and implement skill development policies to sustain labour requirements.

Large-scale public works that add useful infrastructure can also be scaled to meet changing economic conditions.

Overall, given the scale of the crisis, there is little prospect of excessive demand driving inflation in the coming years.

There is also little prospect of a 1970s-style stagflation.

Governments wrongly responded to the politically-motivated supply-shock (oil price hikes) with contractionary demand policies when they should have fast-tracked energy substitution technologies.

More extreme supply shocks explain the hyperinflation of 1920s Germany and modern-day Zimbabwe, both of which are regularly, but erroneously, claimed to demonstrate the danger of fiscal deficits.

The Zimbabwean government’s confiscation of highly productive white-run farms to reward soldiers, who had no experience in farming, caused farm output to collapse, which then damaged manufacturing.

Even with fiscal surpluses, the hyperinflation would have occurred such was the depth of the supply contraction.

What about quantitative easing?

When central banks embarked on large scale government bond buying programs, mainstream economists predicted accelerating inflation.

Indeed, central bankers justified QE as a way to boost inflation, which has been systematically below their price stability targets.

While these bond-buying programs effectively funded fiscal deficits, no inflation resulted because any increase in spending did not push the economy beyond resource constraints.

Mainstream macroeconomics also asserts that bank lending is reserve-constrained and that competition by government deficits for scarce savings drives up rates and ‘crowds out’ more productive private spending.

In the real world, bank lending is only limited by the credit-worthy borrowers that seek loans. Further, central bankers can maintain yields and interest rates at very low levels indefinitely to suit their policy purposes.

Bond markets can only determine yields if governments allow them to.

MMT economists have always considered that fiscal surplus obsessions were unjustified and underpinned destructive policy interventions. Now, as never before, the scale of the socio-economic-ecological challenges before us requires a rejection of these obsessions.

Meeting these challenges will require significant fiscal support over an extended period. Any premature withdrawal of support will worsen the situation.

MMT shows that the problem into the future will not be excessive deficits and/or public debt or inflation.

Rather, the challenge is to generate productivity innovations derived from investment in public infrastructure, education and job creation as our societies age.

Mainstream economic theory has shown time and again that it cannot effectively tackle the challenges facing the world today.

Music – Broken Wings – John Mayall

This short song – Broken Wings – was Track 5, side B on the – The Blues Alone – album, which John Mayall released in November 1967 on the Ace of Clubs Records label.

This was the first full album I ever bought in my early teenage years with my paper round money. The Ace of Clubs label was great because they were (from memory) $1.99 instead of the usual price for a long playing disk of $4.95.

The album followed pretty well straight after he released – Crusade – his third studio effort which marked the appearance of Mick Taylor (just before he took up with the Rolling Stones).

Mayall had a habit of falling out with his guitar players or bassists – Eric Clapton left the Bluesbreakers, then his replacement, the mighty Peter Green left, bass player John McVie left, and then Mick Taylor. Quite a lineup. Fortunately the dissidents (Green and McVie) formed the first version of Fleetwood Mac and we know what that produced before the band turned to pop.

On this album, John Mayall played all the instruments barring the drums, which were provided by the magnificent Keef Hartley whose own recording career is worth getting acquainted with.

Not only did John Mayall play most of the instruments, he also designed the sleeve notes and cover art for the album, which featured himself playing what I believe was a home made guitar.

So on The Blues Alone he could only really argue with himself.

I loved this track (still do) and fell in love with Hammond B3 organs and always wanted one except I never had a place big enough to store it and guitars took my attention away.

Anyway, mellow out and enjoy the artistry.

That is enough for today!

(c) Copyright 2020 William Mitchell. All Rights Reserved.

How Did Market Perceptions of the FOMC’s Reaction Function Change after the Fed’s Framework Review?

Published by Anonymous (not verified) on Fri, 18/12/2020 - 11:00pm in

Ryan Bush, Haitham Jendoubi, Matthew Raskin, and Giorgio Topa


In late August, as part of the Federal Reserve’s review of Monetary Policy Strategy, Tools, and Communications, the Federal Open Market Committee (FOMC) published a revised Statement on Longer-Run Goals and Monetary Policy Strategy. As observers have noted, the revised statement incorporated important changes to the Federal Reserve’s approach to monetary policy. This includes emphasizing maximum employment as a broad-based and inclusive goal and focusing on “shortfalls” rather than “deviations” of employment from its maximum level. The statement also noted that, in order to anchor longer-term inflation expectations at the FOMC’s longer-run goal, the Committee would seek to achieve inflation that averages 2 percent over time. In this post, we investigate the possible impact of these changes on financial market participants’ expectations for policy rate outcomes, based on responses to the Survey of Primary Dealers (SPD) and Survey of Market Participants (SMP) conducted by the New York Fed’s Open Market Trading Desk both shortly before and after the conclusion of the framework review. We find that the conclusion of the framework review coincided with a notable shift in market participants’ perceptions of the FOMC’s policy rate “reaction function,” in the direction of higher expected inflation and lower expected unemployment at the time of the next increase in the federal funds target range (or “liftoff”).

Recent Desk survey data show shifts in expectations for inflation and unemployment rate at liftoff

One can think of the reaction function as a description of how monetary policy settings are adjusted in response to evolving conditions and expectations. Gauging market perceptions of the reaction function can help in interpreting signals from financial markets and assist policymakers in understanding the extent to which market perceptions align with their policy intentions. However, survey questions that only elicit expectations for the path of the federal funds target range reflect both perceptions of the policy rate reaction function as well as expectations for future economic conditions. For example, two respondents may have different views on the likely path of the target range because of differing economic outlooks but nevertheless have similar views on how the FOMC would set the target range in response to given levels of inflation, unemployment, or other variables.

When the target range is at the effective lower bound (ELB), a useful way to try to isolate respondents’ views on the reaction function is to directly ask for their estimates of the values of various economic indicators that will prevail at the time of liftoff. Recent iterations of the SPD and SMP have included just this type of question; respondents have been asked for the most likely level of the unemployment rate, headline twelve-month personal consumption expenditures (PCE) inflation, the labor force participation rate, and the level of real GDP at the time of the next increase in the target range.

Examining changes in responses to these questions before and after the conclusion of the framework review provides compelling evidence of a shift in perceptions of the Committee’s reaction function. As the chart below shows, among market participants, the median expectation for headline twelve-month PCE inflation at liftoff increased from 2.0 percent in the July survey to 2.3 percent in the September survey, while the median expectation for the unemployment rate at liftoff fell from 4.5 percent to 4.0 percent in the respective surveys. Similarly, among primary dealers, the median expectation for headline twelve-month PCE inflation at liftoff increased from 2.2 percent in the July survey to 2.3 percent in the September survey, while the median expectation for the unemployment rate at liftoff fell from 4.5 percent to 4.0 percent. Across both surveys, the interquartile ranges of respondents’ expectations moved in similar directions as the medians.

While various other developments could have influenced perceptions of the FOMC’s reaction function, market commentary and qualitative responses to other questions in the surveys suggest that these shifts were primarily in response to the outcome of the framework review.


Recent responses imply higher inflation, lower unemployment rate at liftoff compared to pre-2015

To put the levels and changes shown above into historical context, we compare recent SPD data with responses to similar questions asked in the SPD from 2011 to 2015, during the prior period in which the target range was set at the ELB. (We focus on SPD data because the SMP was launched in 2014, limiting historical comparisons.)

As shown in the chart below, we find that responses from recent surveys indicate higher expected inflation and a lower expected unemployment rate at liftoff than in the previous ELB episode, and that the recent changes in these expectations are notable by historical standards. Specifically, during the earlier period, the median estimate for headline twelve-month PCE inflation at liftoff averaged about 2 percent until mid-2014 and then declined sharply as expectations persisted for the Committee to raise the target range at a time when energy prices had depressed headline inflation. In light of the impact of energy prices on headline inflation at that time, it is helpful to also compare recent results to estimates for core twelve-month PCE inflation at liftoff during the previous ELB episode, given it should be less impacted than headline inflation by transitory shocks to energy prices. Although only a subset of questions near the end of that period asked for estimates of core PCE inflation at liftoff, median responses for that indicator were less volatile and averaged around 1.4 percent—considerably below the median of 2.3 percent in the September 2020 SPD following the outcome of the framework review. Meanwhile, the median estimate for the unemployment rate at liftoff gradually declined from 8 percent in 2011 to 5 percent just before the 2015 liftoff—higher than the median estimate of 4 percent in the September 2020 survey.


Expectations may continue to evolve following changes to forward guidance

In sum, responses to the SPD and SMP suggest that the Fed’s announcement of the outcome of the monetary policy framework review induced a notable shift in market participants’ perceptions of the FOMC’s reaction function. On the whole, this shift appears large when compared to prior historical experience at the ELB, when a similar survey question was also asked of SPD respondents.

It is important to note that soon after the conclusion of the framework review and after the September surveys, the FOMC introduced changes to the guidance in its post-meeting statement on the overall stance of monetary policy and path of the target range, including the conditions the Committee expects to prevail at the time of liftoff. These changes, which (as the Chair explained in his September press conference) were guided by the outcome of the framework review, may have further shaped market perceptions of the policy reaction function. Indeed, though changes in views were dispersed, the median across combined responses from the November SPD and SMP indicated a slight further increase in the median expectation for the level of inflation at the time of liftoff, while the median expected unemployment rate was unchanged. Going forward, data on perceptions and expectations such as those contained in the Desk’s surveys are likely to prove useful in judging how views about the reaction function evolve.

Ryan Bush is a manager for policy and market analysis in the Federal Reserve Bank of New York’s Markets Group.

Haitham Jendoubi is a senior associate for policy and market analysis in the Bank’s Markets Group.

Matthew Raskin is a vice president for policy and market analysis in the Bank’s Markets Group.

Giorgio Topa is a vice president in the Bank’s Research and Statistics Group.

How to cite this post:

Ryan Bush, Haitham Jendoubi, Matthew Raskin, and Giorgio Topa, “How Did Market Perceptions of the FOMC’s Reaction Function Change after the Fed’s Framework Review?,” Federal Reserve Bank of New York Liberty Street Economics, December 18, 2020, https://libertystreeteconomics.newyorkfed.org/2020/12/how-did-market-per....




Disclaimer

The views expressed in this post are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.

Recovery In U.S. Inflation Breakevens Not Surprising

Published by Anonymous (not verified) on Thu, 17/12/2020 - 3:16am in

Tags 

Inflation

The rise in breakeven inflation in the United States is not particularly surprising, as it is just a return to projecting previous conditions forward. This could be mistaken, but at this point, the burden of proof is upon those who are pushing a story that inflation will be markedly higher or lower. We can easily see a replay of the dynamics of the past cycle, with President-Elect Biden pursuing similar policies to the administration where he was Vice President, and the Republican Party attempting scorched-earth debt scare tactics to force fiscal tightening.
For readers who are not familiar with the concept of breakeven inflation, it is the nominal yield on a conventional Treasury bond less the quoted yield (indexed, or "real yield") on a matched-maturity inflation-linked Treasury (TIPS). I discuss the mechanics of breakeven inflation in my handbook, Breakeven Inflation Analysis. The key observation is that the breakeven inflation rate is (roughly) equal to the required rate of inflation for the TIPS and conventional bond to have the same rate of return when held to maturity -- that is, the TIPS is priced at a breakeven level versus the nominal benchmark.

Chart: 10-year U.S. inflation breakeven

The figure above shows the 10-year breakeven inflation rate (based on the Fed H.15 report). We see that the breakeven inflation rate collapsed both this year and in 2008-2009. This was related to a drop in oil prices (an oil price bubble popped in 2008, and West Texas Intermediate futures prices briefly went negative this year). Furthermore, in 2008 being long inflation-linked bonds was an extremely crowded trade (bond market participants were also caught up in the oil price bubble), and the Financial Crisis resulted in forced liquidations (since a breakeven trade is a balance-sheet-intensive long/short position). Inflation-linked bonds -- like all risky asset classes -- dropped to prices that made no fundamental sense. (This may have happened to a smaller extent in 2020, but I have no evidence either way.)
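The arithmetic of the definition above can be sketched in a few lines. The yields below are illustrative numbers I have made up, not market quotes:

```python
# Breakeven inflation: nominal Treasury yield minus the quoted real
# yield on a matched-maturity TIPS. Illustrative yields, not market data.

def breakeven_inflation(nominal_yield_pct, tips_real_yield_pct):
    """Approximate breakeven inflation rate (annualised, in percent)."""
    return nominal_yield_pct - tips_real_yield_pct

# Example: a 10-year nominal yield of 0.95% and a 10-year TIPS real
# yield of -1.00% imply a 10-year breakeven of 1.95%.
be = breakeven_inflation(0.95, -1.00)
print(f"10-year breakeven: {be:.2f}%")
```

Note that this is the simple yield-spread approximation; a careful calculation would compare total returns cash flow by cash flow.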
Since inflation-linked bonds are based on overall ("headline") CPI, the inflation payout is greatly influenced in the near run by changes in energy prices. On longer horizons, it is difficult for energy prices to significantly diverge from the overall price index, so averages end up looking closer to core (ex-food and energy) inflation that economists tend to focus on. (Some believers in Peak Oil argue that energy prices will spiral out of control, but I believe that overstates what will happen to measured prices.)
The easiest way to deal with the effect of energy prices on expected inflation is to break the investment horizon into two intervals.

  1. The near run outlook, based on the current state of the business cycle.
  2. A longer run average, which is an estimate of where policymakers would like inflation to be.

As an example, I would break up the 10-year inflation breakeven into the first 5 years (the spot 5-year breakeven inflation rate), plus the 5-year breakeven starting 5 years forward. (The advantage of that separation is that the 10-year breakeven is roughly just the average of those two levels.)

Chart: 5Y/5Y TIPS breakeven versus historical average

The figure above shows the 5-year, 5-year breakeven inflation rate (I used the series calculated by the Fed) compared to the 10-year moving average of the CPI inflation rate. (A cleaner series would be the 10-year annualised inflation rate, but this series is easier for readers to replicate.) As seen, the forward rate just reverted to slightly above the historical average. If one wanted to use the 10-year moving average as an adaptive expectation, this is consistent with there being a small inflation risk premium in TIPS (actual inflation needs to be slightly higher than the expected level of inflation to break even).
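Using the two-interval split described above, the 5-year, 5-year forward breakeven can be backed out from the spot 5-year and 10-year breakevens under the linear averaging approximation. The input levels here are made up for illustration:

```python
# Back out the 5y5y forward breakeven from spot breakevens, using the
# linear approximation that the 10-year breakeven is (roughly) the
# average of the spot 5-year rate and the 5y5y forward rate.
# Illustrative inputs, not market quotes.

def forward_5y5y(be_5y_pct, be_10y_pct):
    """5-year breakeven starting 5 years forward (percent, approximate)."""
    return 2.0 * be_10y_pct - be_5y_pct

spot_5y, spot_10y = 1.9, 2.0
fwd = forward_5y5y(spot_5y, spot_10y)
print(f"5y5y forward breakeven: {fwd:.2f}%")  # 2.10%

# Sanity check: averaging the spot 5-year and the forward rate
# recovers the 10-year breakeven.
assert abs((spot_5y + fwd) / 2 - spot_10y) < 1e-9
```

A fully precise calculation would compound the two legs geometrically, but at these low yield levels the linear version is close enough for discussion purposes.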
(The issue of risk premia in inflation-linked bonds is a thorny question. There are two plausible, countervailing forces. Inflation-linked bonds are less liquid, and they should be cheaper than conventional government bonds -- which are a pricing benchmark. Conversely, there is a mismatch between the supply and demand of inflation protection, with a far greater demand for protection than there is supply. This ought to result in inflation-linked bonds being expensive -- breakevens above inflation. Since we do not know exactly what market participants expectations for inflation are, and there is only a single spread between the observed breakeven and this forecast level, it is extremely hard to come up with a plausible model for the premium. That said, this is only a concern for academics or central bankers -- as an investor, you know what your own inflation forecast is, and you can set your own target risk premium.)
Although adaptive expectations are sneered at by modern academics, their use is justifiable as long as one understands the risks. The choice of 10 years in the moving average was not accidental -- 10 years largely matches the length of recent expansions. The cyclical average tells us where overall policy settings (i.e., fiscal, monetary, and regulatory policy) have let inflation settle. Unless those policy preferences change, the best guess is that future inflation will tend to settle near those levels. Meanwhile, if one looks at a long history of average inflation, one sees that the average has settled into a relatively small range since the early 1990s.
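As a minimal sketch of the adaptive-expectations proxy described above, a trailing average of annual inflation can be computed with a simple rolling mean. The inflation series below is synthetic, chosen only to show the mechanics:

```python
# Trailing moving average of annual CPI inflation, used as a crude
# adaptive inflation expectation. Synthetic data for illustration only.

def trailing_average(series, window):
    """Trailing mean over the last `window` observations (shorter at the start)."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

annual_inflation = [2.1, 1.5, 1.6, 0.1, 1.3, 2.1, 2.4, 1.8, 2.3, 1.4]
expectation = trailing_average(annual_inflation, window=10)
print(f"latest 10-year average: {expectation[-1]:.2f}%")
```

With real data one would use the annualised 10-year inflation rate rather than an average of annual rates, as noted in the text, but the moving average is easier to replicate.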
I am extremely unconvinced by stories that there has been a secular change in policymakers' views towards inflation. As such, I see no reason to argue against where forwards are. As for the near-run inflation forecast, even if I wanted to be a forecaster (which I am not), my feeling is that the outlook is murky. The current situation is somewhat unprecedented in the modern era, with a mixture of unemployment alongside industries hitting capacity constraints. Meanwhile, it is hard to judge how large the post-vaccine bounce will be. 
Finally, one popular belief is that "breakevens have nothing to do with inflation." That is the type of thinking that drove the forward breakeven inflation rate down to 1.5%, which generated a good trading opportunity for anyone who disagreed with that sentiment.
(c) Brian Romanchuk 2020

The Paradox of the Two Knights

Published by Anonymous (not verified) on Mon, 07/12/2020 - 12:01am in

By Carlos García Hernández

Article originally published in Spanish by RedMMT here

Two knights chess pieces on a chess boardPhoto by Hassan Pasha on Unsplash

Marx argues that any economic system based on private ownership of the means of production is doomed to disappear, in order to give rise to a superior system without private ownership of the means of production. The reason for this collapse of capitalist society and the subsequent emergence of socialism is to be found in the Law of the Tendency of the Rate of Profit to Fall. According to this law, the contradictions among social classes within the capitalist system can only tend to increase, because in order to be able to compete against each other, the capitalists have to increase their rate of profit permanently. This is only possible through increased exploitation of the workers, which results in ever lower wages and ever longer working hours. However, this impoverishment of wage-earning labour comes up against a limit, “capital itself”. Below this limit, a crisis of demand occurs after which workers cannot subsist, as they cannot buy enough of the goods they produce. Moreover, the few capitalists who exist at this stage go out of business. This is how the edifice of capitalism collapses and a better, sustainable system without private ownership of the means of production, called socialism, emerges, whose higher phase is called communism. “Development of the productive forces of social labour is the historical task and justification of capital. This is just the way in which it unconsciously creates the material requirements of a higher mode of production”.

No one took Marx’s work more seriously than John Maynard Keynes. That is why he realised that history was faced with a fundamental question: Is what Marx says true? In order to answer this question, we have to pay attention to the logical form of the Law of the Tendency of the Rate of Profit to Fall. That form is the modus tollens, ((P→Q) ʌ ¬Q) → ¬P: if private property exists (P), then the system collapses (Q); so if the system is observed not to collapse (¬Q) while private property exists, the premise P→Q must be false.
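The logical point can be checked mechanically: modus tollens holds for every truth assignment, and a single case with P true and Q false is enough to falsify the implication P→Q. A small brute-force check:

```python
# Brute-force truth-table check of the logic used in the text.
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# Modus tollens: ((P -> Q) and not Q) -> not P holds for every valuation.
assert all(
    implies(implies(p, q) and not q, not p)
    for p, q in product([True, False], repeat=2)
)

# One counterexample (P true, Q false) falsifies P -> Q: private
# property exists, yet the system does not collapse.
p, q = True, False
print("P -> Q holds in this case?", implies(p, q))  # False
```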

Certainly, during the decades between the publication of Marx’s Capital and the time of Keynes, there had been dramatic developments. While capitalism did not seem to be on the verge of collapse in many places on the planet, the communist revolution had triumphed in the Soviet Union, in 1929 the US economy had entered a major recession of the kind described in Marx’s analysis of demand crises, and Germany was being torn between Nazism and communism. In the eyes of an anti-socialist like Keynes, the situation was highly worrying. However, to prove the falsity of the premise P→Q it is enough that the premise fails in one single case. This led Keynes to study what, in his eyes, was Marx’s main contribution, his analysis of the monetary circuit. If there was any contradiction in Marx’s approaches, it had to be there.

To get to the monetary circuit, Keynes first had to go through Marx’s labour theory of value. In fact, he accepted it as true and wrote: “It is my belief that much unnecessary perplexity can be avoided if we limit ourselves strictly to the two units, money and labour, when we are dealing with the behaviour of the economic system as a whole”. From an anthropological point of view, Keynes has no problem accepting that human labour is the only source of value and that commodities receive their value from human labour, just as cold water receives heat from a hot object when the object is immersed in it. The contradiction is found in the next step, when Marx analyses the monetary circuit in a monetary economy of production in which there is a shift from producers who exchange their commodities for money in order to buy other commodities (c – m – c) to capitalists who accumulate money in order to buy commodities which they then sell for a larger amount of money thanks to the surplus value extracted from the workers (m – c – M). Marx explains this step as an extension of barter: he mentions Robinson Crusoe and takes a metallist stance with regard to money. This is where Keynes finds the contradiction he was looking for, in the exogenous commodity money presented by Marx, and it is from here that he builds his work.

First, he denies exogenous money and defends the endogenous character of fiat money. Thus, in his “Treatise on Money,” he presents the creation of money as an endogenous part of the economic cycle and denies the loanable funds theory. Money is mostly created by banks lending to their customers regardless of their reserves, as they can always turn to the Central Bank as a lender of last resort. The rest of the money is created directly by states through the coordination of the Central Bank and the Treasury to carry out public spending. In both cases, the money is denominated in national currency and comes from the Central Bank, which does not depend on its gold or silver reserves, tax collection or debt issuance to issue national currency.

This raises a political question, again not analysed by Marx. If in the “Treatise on Money” the creation of money is presented as a decision made by banks when they are faced with an opportunity to make profits, in the “General Theory”, the creation of money is also presented as a political decision by governments to create aggregate demand through public spending via deficits. Without this ability of governments to create aggregate demand through public deficits, not only would Marx’s prophecy about the collapse of capitalism be fulfilled, but it would also be impossible to explain the very birth of the monetary economies of production. The monetary circuit is not born of barter, neither of gold nor of silver, but of credit granted by governments as sovereign issuers of national currency, which in today’s societies passes through the existence of central banks.

Keynes’ recipe is simple: to avoid the demand crises described by Marx, states must create aggregate demand through public expenditure in order to maintain levels of full employment and levels of welfare that do not lead to the collapse of capitalism. This is the recipe that Franklin Delano Roosevelt applied, in contact with Keynes himself, to set in motion the New Deal that brought the US out of the Great Depression that began in 1929, and it is also the recipe that was applied in the West after the Second World War to build up welfare and social protection systems. Here are two cases in which P→Q is not fulfilled and therefore the premise enunciated by Marx is refuted.

 

Chess board showing the two knights endgame

 

In my opinion, it is essential for the left to draw lessons from all this accumulated experience. I like to pose the question as the end of a chess game in which only the two kings and two knights of the same colour are on the board. In these cases, the game is considered a draw. However, a paradox occurs. Theoretically, it is still possible to reach a checkmate position such as the one shown in the diagram. However, the game is considered a draw because a checkmate position like the one shown in the diagram is only obtained if the player who only has his king collaborates with the player who has both knights. If the player with only the king on the board does not cooperate, checkmate is impossible. The same applies to the question at hand. The states that allow the existence of private ownership of the means of production collapse if they are incompetently governed. States with private ownership of the means of production do not collapse if they create sufficient aggregate demand through their spending policies via public deficits and if they intervene in the economy through a strong public sector presence that guarantees high levels of welfare for their citizens. The collapse of capitalism in Russia and the rise of National Socialism in Germany were only possible because of the manifest incompetence of Tsar Nicholas II and Kaiser Wilhelm II respectively; likewise, the collapse of capitalism in the USA during the Great Depression was only prevented by public intervention through the New Deal. We are currently witnessing a similar event in the European Union. To combat the COVID pandemic, the EU has decided to suspend its absurd and reactionary deficit limits. It has done so because the pandemic threatened the existence of capitalism itself in the EU.
As soon as the pandemic passes, the EU will re-impose its deficit limits so that its model of mercantilist capitalism continues to guarantee the privileges of the export elites and continues to condemn the working majority to suboptimal living standards.

Does this mean that we should renounce socialism, that the attempt at a socialist transformation of the economy and society as a whole is a waste of time? Not at all. To renounce socialism is to renounce a better life. Keynes himself writes: “it is an outstanding characteristic of the economic system in which we live that, whilst it is subject to severe fluctuations in respect of output and employment, it is not violently unstable. Indeed, it seems capable of remaining in a chronic condition of subnormal activity for a considerable period without any marked tendency either towards recovery or towards complete collapse. Moreover, the evidence indicates that full, or even approximately full, employment is of rare and short-lived occurrence. Fluctuations may start briskly but seem to wear themselves out before they have proceeded to great extremes, and an intermediate situation which is neither desperate nor satisfactory is our normal lot”. We socialists cannot resign ourselves to living under this order of things. To conclude this article I would like to present very succinctly a proposal, which I have elsewhere called fiat socialism, as an alternative path towards the socialist transformation of society and which I hope will soon take the form of a book so that it can be presented more widely.

To begin with, the two opponents must shake hands and accept that the game is a draw. Socialists have to accept that there are no historical laws and capitalists have to accept that the most they can offer are unsatisfactory solutions to major social problems. Then the pieces have to be put in place to start a new game.

We have to start asking ourselves, what does it mean that there are no historical laws? Historical laws like the one expounded by Marx conceive history as the development of a law towards whose essence (idea) humanity flows over time. Therefore, the essence (the idea) is placed at the end of a process towards which humanity tends inexorably. This scheme followed by Marx was adopted first by Aristotle and then by Hegel as opposed to Plato and Kant respectively and must be abandoned by the left. This means that we must return to Kant and abandon Hegel. There are no inexorable historical laws governing the destiny of humanity; the human being is not an actor whose mission is to hasten the birth pangs of a new society predetermined from the beginning of history. On the contrary, we must start from a primaeval idea from which our political activity is derived. This entails establishing our goals as the premises of our politics. We believe that these premises are correct, but we cannot be sure of this and we do not even know if they will become a reality. The truth or falsity of our premises will have to be corroborated by free and democratic elections. In the specific case of socialism, we have to start from a definition that does not reflect any inexorable historical law but the ends we defend. I propose that those ends should be those set out by the American economist Stuart Chase, who in his 1942 book “The Road We Are Traveling” says that all economic policy must meet five fundamental objectives:

  • guaranteed and permanent full employment
  • full and prudent use of natural resources
  • a guarantee of food, shelter, clothing, health services and education to every citizen
  • social security in the form of pensions and subsidies
  • a guarantee of decent labour standards.

If we look at all but the second point, which has to do with the preservation of nature, these have been fundamental axes of socialism in all its forms, from the socialism of the Soviet Constitution as the first binding legal document that included guaranteed work, to the socialism of the welfare systems, which both in the former socialist bloc and in the advanced societies of the West guaranteed access to the services set out by Chase. In fact, it was the defence of these five points that enabled the left to survive the demise of the Soviet Union, and in terms of environmental protection, the left has already incorporated the Green New Deal into its ideas. Furthermore, these five points were fundamental in non-Soviet socialist experiences of great importance that we cannot forget, such as that of Mohammad Mosaddeq in Iran, the Arab socialism of Gamal Abdel Nasser and the Ba’ath Party, the experience of Olof Palme in Sweden, of Thomas Sankara in Burkina Faso, of Patrice Lumumba in Congo, of Salvador Allende in Chile, of Evo Morales in Bolivia, of Jaime Roldós Aguilera in Ecuador, of Maurice Bishop in Grenada or of Hugo Chávez in Venezuela, among others. It is, therefore, the achievement of these five points that we must call socialism, not a system in which, in accordance with a historical law and regardless of whether these points are achieved, there is no private ownership of the means of production or in which surplus value is equal to zero. Both the size of the private sector and the levels of surplus value must be decided by the citizenry democratically. There will be places where, in accordance with the different cultural traditions of their constituents, socialist organizations will advocate the achievement of these five points through greater or lesser involvement of the private sector. 
Likewise, workers, in return for guaranteed work, good wages, adequate social benefits and not having to take the risks involved in private entrepreneurship, will tolerate a greater or lesser degree of surplus value. What is important is that they have in their hands the democratic mechanisms necessary to control these levels. In my view, the best mechanism for this is the job guarantee based on employment buffer stocks advocated by modern monetary theory.

This leads us to the last section of this article, the one devoted to the method. In my view, the best method to achieve the five goals of socialism outlined above without creating runaway inflation is modern monetary theory. As its founder, the Australian economist Bill Mitchell, says, this economic school is not a political regime, but a lens through which economic science can be focused in the right way. Modern monetary theory tells us the method for employing all the real resources of the economy while maintaining price stability. The full employment of these resources can be directed towards the objectives that are decided politically. My proposal is to direct the full employment of real resources to the five objectives set out above and to give this employment the name of socialism.

I am therefore of the opinion that a new definition of socialism should be put forward. Currently, the Spanish Royal Academy of Language defines socialism as: “Social and economic system based on collective or state ownership and administration of the means of production and of distribution of goods”. This definition is filled with notions from historical laws, whose existence we have previously denied. I, therefore, propose that a new definition of socialism be: Social and economic system which, through modern monetary theory, provides guaranteed and permanent full employment, full and prudent use of natural resources, a guarantee of food, shelter, clothing, health services and education to every citizen, social security in the form of pensions and subsidies, and a guarantee of decent labour standards.

As I have said, I have called this in the past fiat socialism, but it could also be called flexible socialism, as it frees socialism from the rigidities imposed by historical law. This socialism will take different forms in different places; it accepts that socialist organizations are not exempt from making mistakes; it will involve different levels of participation by the private sector, as well as different levels of gross operating surplus; and it is open to processes of improvement in order to mobilize real resources in the best possible way to achieve the five ends of socialism. Only one rigidity is established: monetary sovereignty. Modern monetary theory is only valid in monetary systems where the state is the sovereign issuer of its currency and where there is appropriate coordination between the Central Bank and the Treasury. If Archimedes in ancient Greece said, “give me a fulcrum and I will move the world”, a socialist Archimedes would say, “give me monetary sovereignty and I will build you socialism”. Without the fulcrum of monetary sovereignty, the proposal of socialism as explained above is not possible. In most parts of the world, this is not a problem because monetary sovereignty is already in place, but in the European Union this is the main stumbling block to any socialist transformation of the economy. Therefore, in Spain, the first step towards socialism would be to abandon the European Union and the euro.

Euro delendus est.

Carlos García Hernández – editor of Lola Books publishing house.

 


The post The Paradox of the Two Knights appeared first on The Gower Initiative for Modern Money Studies.

As We Exhaust Our Oil, It Will Get Cheaper But Less Affordable

Published by Anonymous (not verified) on Fri, 04/12/2020 - 3:34am in

It was a bet heard around the world. Okay, that’s an exaggeration. It was a bet heard mostly by academics and sustainability buffs. But still, it was a bet … and it was important.

The year was 1980. The players were biologist Paul Ehrlich and business professor Julian Simon. The two had conflicting ideas about where humanity was headed. Ehrlich, the author of the 1968 book The Population Bomb, thought humanity was headed for a Malthusian catastrophe. Simon thought the opposite. Humanity, he argued, was itself The Ultimate Resource. Because humanity’s genius knew no bounds, Simon proclaimed that we could think our way out of any problem.

The debate between Ehrlich and Simon was fundamentally about resource scarcity. What’s interesting, though, is that their actual wager wasn’t about any physical measure of resource reserves. Their wager was about prices.

Simon challenged Ehrlich to bet on the price of raw materials. Pick any ‘non-government controlled’ resource, Simon said, and he’d bet that the price would decrease over time. Ehrlich chose five metals — copper, chromium, nickel, tin, and tungsten. If their inflation-adjusted prices went down by 1990, Ehrlich would lose. If metal prices went up, Ehrlich would win.

Ehrlich lost.

Actually, Ehrlich lost the bet the moment he entered it. Ehrlich was concerned with the physical exhaustion of resources. Had he bet Simon on any physical measure of resource reserves, Ehrlich would have won. (The Earth isn’t making more metal, so we’ve been exhausting our supply since day one.) Instead, Ehrlich fell for a bait and switch. He allowed Simon to frame scarcity in terms of prices. It was a fateful mistake.

The switch from physical scarcity to prices is one of economists’ favorite tricks for dispelling concerns about sustainability. In this post, I’ll show you how to avoid getting hoodwinked. The key is to realize that resources can get cheaper at the same time that they get less affordable. And when it comes to the price of oil, I think this is exactly what’s in store.

Hotelling’s ‘rule’

We can’t talk about the price of non-renewable resources without discussing Hotelling’s rule. Like all ‘rules’ in economics, it’s not an actual rule (i.e. a law of nature). It’s just a hypothesis. But it’s a hypothesis that dominates how economists think about the price of scarce resources. Hotelling’s ‘rule’ was outlined by Harold Hotelling in a 1931 paper called ‘The Economics of Exhaustible Resources’. In a nutshell, Hotelling argued that the price of a non-renewable resource should grow exponentially with time. Here’s his reasoning.

Imagine two people, Alice and Bob. Both own a stock of 100 barrels of oil. Alice sells her stock today for $50 per barrel, earning $5000. Like a good capitalist, Alice puts the money in the bank and lets it collect interest. Suppose she earns a hefty 10% annual return. After 10 years, her oil money has grown to about $13,000.

Back to Bob. Unlike Alice, Bob sat on his oil stock, waiting for the right time to sell. After 10 years, he’s finally ready. He calls Alice and finds out she’s got $13,000 in the bank from her 100 barrels of oil. Bob does some math and realizes that to match Alice’s earnings, he has to sell his oil for $130 per barrel (almost triple Alice’s price). Not wanting to lose money relative to Alice, that’s the price Bob asks. And damned if he doesn’t get it!

If everyone behaves like Alice and Bob (as rational money maximizers), the price of oil will grow exponentially at the rate of interest. That’s Hotelling’s ‘rule’ (hypothesis). More generally, Hotelling’s ‘rule’ predicts that the price of any non-renewable resource should grow exponentially with time.1
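Alice and Bob’s arithmetic is easy to check. Here’s a minimal Python sketch, using the numbers from the example above:

```python
# Hotelling's 'rule' as a compounding argument: to match the interest
# Alice earns by selling now and banking the proceeds, Bob's price must
# grow at the rate of interest.

def breakeven_price(price_today: float, interest_rate: float, years: int) -> float:
    """Price Bob must charge after `years` to match Alice's banked return."""
    return price_today * (1 + interest_rate) ** years

alice_proceeds = 100 * 50.0                 # 100 barrels sold at $50
alice_banked = alice_proceeds * 1.10 ** 10  # 10% interest, 10 years

bob_price = breakeven_price(50.0, 0.10, 10)

print(round(alice_banked))    # 12969 -- about $13,000 in the bank
print(round(bob_price, 2))    # 129.69 -- almost triple Alice's price
```

The general prediction falls out directly: with rational money maximizers, the price path is `price_today * (1 + r) ** t`, i.e. exponential growth at the rate of interest.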

The bait and switch

When it comes to resource exhaustion, Hotelling’s ‘rule’ is the bait — an idea that is simple and plausible. The switch comes when we actually test Hotelling’s ‘rule’. Suppose we find that the price of a non-renewable resource does not grow exponentially. That would seem to falsify Hotelling’s ‘rule’. But that’s not how economists see it. Instead, they argue that since the price is not growing exponentially, the non-renewable resource is in fact not being exhausted.

As Exhibit A for this logic, take the inflation-adjusted price of oil. Figure 1 shows the trend in this price over the last 160 years. Actually, ‘trend’ is the wrong word because … there isn’t one. Yes, oil prices have oscillated dramatically. But there is no sign of a long-term trend. Today, the price of oil is close to $40 — almost exactly its historical average (in ‘2020 $US’).2

Figure 1: The inflation-adjusted price of oil. The blue curve shows the annual price of oil in ‘2020 $US’. The red curve shows monthly data in 2020. Over the last 160 years, the average inflation-adjusted price was $38 per barrel. That’s roughly what oil costs today. [Sources and methods].

Since the inflation-adjusted price of oil has not grown exponentially, it appears that Hotelling’s ‘rule’ is wrong. There’s no shame in that. When tested, most scientific hypotheses turn out to be wrong. But here’s the shameful part. Rather than admit that Hotelling’s ‘rule’ is wrong, some economists claim that this oil-price data shows something completely different. It indicates, they argue, that we’re not exhausting our oil reserves.

It’s a trick that fools many people. Even Paul Ehrlich was hoodwinked. True, Ehrlich wasn’t tricked into thinking that non-renewable resources are not being exhausted. But he was goaded into a bet where resource scarcity was measured using prices. Fortunately, we can learn from Ehrlich’s mistake. As we exhaust non-renewable resources, Hotelling’s ‘rule’ claims that their price should grow exponentially. It’s an idea that is simple, plausible, and false.

The power to consume

If Ehrlich had wagered on a physical measure of resource scarcity, he would have won his bet with Simon. But at least to me, this hindsight is little consolation. Simon and Ehrlich bet on prices for a good reason. Prices dominate our lives. So it’s natural to want to connect prices to resource scarcity.

Having chastised Ehrlich for betting on prices, I’ll now argue that prices do connect to how we harvest resources … just not the way Ehrlich thought. What was missing in the Simon-Ehrlich bet was income. When it comes to consuming a resource, what matters is not the price itself, but how much of the resource we can afford to buy.

Wait, you say. Aren’t ‘price’ and ‘affordability’ two sides of the same coin? If the price of oil drops, doesn’t oil also become more affordable? The answer is yes … in the short term. That’s because over a short period (a few months), your income will probably stay the same. So when the price of oil drops, you can afford to buy more oil.

Over the long term, however, your income changes. And that means prices are not the same thing as affordability. Prices can go up at the same time that resources become more affordable. And prices can go down at the same time that resources become less affordable. What matters is not prices themselves, but how they relate to income.

We can measure affordability by comparing your income to a commodity’s price. I’ll call this ratio ‘purchasing power’:

\displaystyle\text{purchasing power} = \frac{\text{your income}}{\text{commodity price}}

Purchasing power measures your ability to consume a commodity. The larger your purchasing power, the more of the commodity you can consume. What’s important is that purchasing power is affected by both the commodity price and your income. When your income changes, the commodity price on its own says little about affordability.
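As a quick illustration of this ratio in code (with made-up numbers), here is how a falling price can coincide with falling affordability when income falls faster:

```python
# Purchasing power = income / commodity price (as defined above).
# Hypothetical numbers, chosen only to illustrate the point.

def purchasing_power(income: float, price: float) -> float:
    """How many units of the commodity your income can buy."""
    return income / price

# Year 1: income $50,000, oil at $50 per barrel
pp_then = purchasing_power(50_000, 50)   # 1000 barrels

# Year 2: oil is cheaper ($40), but income has halved
pp_now = purchasing_power(25_000, 40)    # 625 barrels

print(pp_then, pp_now)  # 1000.0 625.0 -- cheaper oil, yet less affordable
```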

With purchasing power in hand, let’s return to the price of oil. As Figure 1 showed, there is no clear trend in the inflation-adjusted oil price. But what about the affordability of oil?

To measure affordability, we need to compare the price of oil to someone’s income. Let’s use Americans as our guinea pigs. We’ll compare the price of oil to the average American income (measured by GDP per capita). I call the result ‘US oil purchasing power’. It measures the average American’s ability to purchase oil:

\displaystyle\text{US oil purchasing power} = \frac{\text{US GDP per capita}}{\text{price of oil}}

Figure 2 shows the history of US oil purchasing power. Unlike inflation-adjusted oil prices (which have no clear trend), oil purchasing power trended upwards. Actually, that’s an understatement. From the 1860s to the 1960s, US oil purchasing power grew by a factor of 40. (Note that in Figure 2, the vertical axis uses a log scale, so exponential growth appears as a straight line.)

Figure 2: The oil purchasing power of the average American. I’ve indexed oil purchasing power so that it equals 1 in 1863. Note that the vertical axis uses a log scale, so exponential growth appears as a straight line. [Sources and methods].

What’s interesting, in Figure 2, is that the trend in purchasing power is visible only over long stretches of time. That’s because over the short term, oil prices fluctuate wildly, trumping changes in income. Even over a decade (the length of the Simon-Ehrlich wager), oil-price changes trump income changes. The long-term trend in purchasing power becomes visible only when you look at century-long time scales.

Speaking of century-long trends, let’s look at the big picture in Figure 2. It’s clear that something changed around 1970. In the century prior to 1970, US oil purchasing power grew steadily. But in the half century after 1970, oil purchasing power stagnated. And if the smoothed trend in Figure 2 is any indication, US oil purchasing power is now declining.

What explains this long-term trend in oil purchasing power? It turns out that the answer is simple. Oil purchasing power grows in lock step with oil-and-gas productivity.

Purchasing power and productivity

When oil purchasing power increases, we can afford to consume more oil. But how do we make this happen? How do we make oil more affordable?

To frame the question, think about it this way. When you buy crude oil, your money doesn’t go to the dead dinosaurs who made it. No, your money goes to the (living) humans who harvested the oil. This is a banal but important observation. It means that there are only two ways to make oil more affordable:

  1. Decrease the relative pay of the people who harvest oil
  2. Decrease the number of people needed to harvest the oil

While both options are important, there are limits to the first one. You can lower relative pay only so much before people revolt. Imagine, for instance, trying to halve the pay of every oil worker. I grew up in oil country (Alberta), and I can tell you that this policy wouldn’t fly.

Now imagine the second option — halving the number of people needed to extract a barrel of oil. At first, this seems just as brutal as halving pay. Won’t 50% of oil workers lose their jobs? The answer is yes … but only if oil consumption remains constant. The thing about consumption, however, is that it almost never remains constant in the face of rising productivity. Instead, when productivity grows, consumption also grows. So imagine that as we halve the number of workers needed to produce a barrel of oil, we also double our oil consumption. In this scenario, every oil worker would keep their job. It’s a win for oil workers and a win for society. (It’s a loss for the Earth’s climate… but we’ll ignore that.)
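The arithmetic of that scenario is worth making explicit. A hypothetical sketch (the numbers are mine, chosen only for illustration):

```python
# Total oil employment = barrels consumed x workers needed per barrel.
# If productivity doubles (workers per barrel halves) while consumption
# doubles, employment is unchanged -- the win-win described above.

def employment(barrels_consumed: float, workers_per_barrel: float) -> float:
    return barrels_consumed * workers_per_barrel

before = employment(1_000_000, 0.002)  # 2000 workers
after = employment(2_000_000, 0.001)   # double output, half the labour per barrel

print(before == after)  # True -- every oil worker keeps their job
```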

When it comes to making oil more affordable, increasing oil productivity is the path of least resistance. With this in mind, let’s have a look at US oil-and-gas productivity. Figure 3 shows how it’s changed over the last 160 years. I’ve plotted here the energy output per worker in the US oil-and-gas sector. From 1860 to 1970, this output grew by a factor of 50. In other words, 50 times fewer workers were needed to harvest the same amount of oil. That’s a spectacular change.

Figure 3: Energy output per worker in the US oil-and-gas sector. [Sources and methods].

Now things are starting to make sense. Over the last century and a half, oil grew steadily more affordable for Americans (Figure 2). At the same time, US oil-and-gas productivity rose steadily (Figure 3). It doesn’t take a genius to connect the trends. It seems that productivity is the primary driver of affordability.

Figure 4 puts it all together. Here I compare the growth of US oil-and-gas productivity to the growth of US oil purchasing power. I’ve plotted both series on the same scale and indexed them to equal 1 in 1863. As oil-and-gas productivity grows, oil purchasing power increases in lock step. In fact, it’s roughly a one-to-one relation.

Figure 4: The growth of US oil purchasing power and oil-and-gas productivity. [Sources and methods].

The connection between oil purchasing power and oil-and-gas productivity is easy to explain. Let’s break it down. (If you don’t like algebra, skip ahead.)

We’ll start with the price of oil. This price is the (gross) income that oil companies earn per barrel of oil:

\displaystyle\text{price of oil} = \text{income (of oil companies) per barrel of oil}

We’ll assume that this income gets paid to oil-and-gas workers. (We’ll ignore profit.) So the price of oil equals the income per oil-and-gas worker times the number of workers employed per barrel of oil:

\displaystyle\text{price of oil} = (\text{income per worker})  \times (\text{workers per barrel of oil})

Now let’s assume that oil-and-gas workers earn roughly the same income as everyone else. We’ll assume they earn GDP per capita. Replacing income per worker with GDP per capita, we get:

\displaystyle\text{price of oil} \approx (\text{GDP per capita})  \times (\text{workers per barrel of oil})

Now we move GDP per capita to the left side of the equation to get:

\displaystyle \frac{\text{price of oil}}{ \text{GDP per capita}} \approx   \text{workers per barrel of oil}

We’re almost there. We take the inverse of both sides to give:

\displaystyle \frac{ \text{GDP per capita}}{\text{price of oil}} \approx   \text{oil barrels per worker}

And there you have it. The left side of the above equation is oil purchasing power. The right side is oil productivity. Putting it all together, we have:

\displaystyle \text{oil purchasing power} \approx \text{oil productivity}
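Here is a toy numerical check of this identity in Python. The numbers are hypothetical, and the assumptions are exactly those made above: all oil revenue goes to wages, and oil workers earn GDP per capita.

```python
# Toy economy check of: oil purchasing power ~ oil productivity.
# All figures are hypothetical.

gdp_per_capita = 60_000.0     # $ per worker per year
barrels_per_worker = 1_200.0  # oil productivity

# If one worker's annual pay must be recouped from the barrels they
# produce, the price per barrel is pay divided by output:
price_of_oil = gdp_per_capita / barrels_per_worker  # $50 per barrel

oil_purchasing_power = gdp_per_capita / price_of_oil

print(oil_purchasing_power == barrels_per_worker)  # True: identity holds
```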

Now, this equation is not exact for a few reasons. First, oil-and-gas workers don’t earn exactly GDP per capita. Second, we haven’t accounted for profits that flow to oil company owners. And third, our empirical measure of productivity covers both oil and gas output, but we’ve compared this productivity to the price of oil only.3

Caveats aside, the growth of oil productivity explains most of the growth of oil purchasing power. And this fact brings us back to resource scarcity.

Enter resource scarcity

On the day we drilled the first well, we started to exhaust our supply of oil. A naive prediction would be that from this day forward, oil would become less affordable. That didn’t happen. Instead, oil got more affordable (until recently). Why?

As I’ve just shown (in Figure 4), oil got more affordable because oil productivity increased. And productivity increased despite the fact that we were exhausting our supply of oil. If we look at oil resources in isolation, this fact sounds counter intuitive. But what’s missing is that oil production depends jointly on oil resources and our technology. Better technology makes productivity grow, even as we deplete our energy reserves.

Figure 5 shows an example of this interplay. On the left is the Drake Well — the first productive US oil well. Drilled in 1859, it struck oil at a depth of 70 feet. Today, such a shallow strike is unheard of. Modern wells are often thousands of feet deep. But although the Drake oil was easy to get (by today’s standards), the technology of the day was crude. Most work was done by hand. So productivity was poor.

Fast forward to the present. Today, we drill for oil in the most unlikely places — thousands of feet below ground that is itself thousands of feet under water. But while this oil is far more difficult to extract, operations like the Troll A platform (Figure 5, right) are orders of magnitude more productive than the Drake well. That’s because they use far better technology.

Figure 5: Drilling for oil and gas, then and now. On the left is the Drake Well, drilled in 1859. It was the first productive oil well in the US. [Source: AOGHS]. On the right is the Troll A structure (circa 1996), a natural gas platform off the coast of Norway. It’s the tallest structure ever moved by humanity. [Source: datis-inc.com].

Looking at this growth of technology, Julian Simon claimed that it would trump resource scarcity. And in a certain sense, he was right. That’s how it’s worked in the past. But that’s not how it will work forever. The problem comes down to basic thermodynamics. Technology isn’t powered by human ingenuity (as Simon claimed). Technology is powered by energy. Think of technology as a tool for creating a positive feedback loop. It allows us to use energy to harvest energy. We harvest fossil fuels and then feed this fuel into technology that harvests still more fossil fuels. The result is that productivity grows exponentially.

Unfortunately, this feedback only works if we can perpetually feed our technology more energy. That means technology can’t save us from resource exhaustion. The endgame (for oil) happens when there’s no oil left to harvest. At that point, the fact that we have marvellous oil-extracting technology is moot. But the problem starts long before we run out of oil. As we exhaust the easy-to-get reserves, we move on to the harder ones. Yes, our technology improves. But at some point, the oil becomes so hard to find and extract that this difficulty trumps technology. (Think drilling in 2 km of water.) When this turning point happens, oil productivity stops growing and begins to decline.

Looking at Figure 3, we can see that this productivity peak has already happened. In the US, it came in 1970. Since then, US oil-and-gas productivity has plateaued. Of course, it’s possible that we’re just in the midst of a lull, and that the exponential growth of oil-and-gas productivity will soon continue. But I’m not betting on it.

The problem is simple — we’ve already passed the peak of conventional oil production. As we exhaust this high-quality oil and move on to poor-quality stuff, I think oil-and-gas productivity will decrease. In response, oil purchasing power will also decline.

Basically, I’m guessing that the correlation shown in Figure 6 will continue to hold. In the past, oil productivity and oil purchasing power grew together. In the future, I predict that they will decline together. How quickly this will happen, however, is anyone’s guess.

Figure 6: US oil purchasing power vs. oil-and-gas productivity. [Sources and methods].

Back to prices

What’s interesting is that even if oil purchasing power does decline as I’ve predicted, this says nothing about prices. Oil prices could explode (as many peak-oil theorists expect). But oil prices could also collapse. It all depends on what income does. Let’s have a look at these opposite scenarios.

Scenario 1: Oil prices explode

In a future marked by oil scarcity, the price of oil explodes. It’s a future that many peak-oil theorists expect. It’s the future that Paul Ehrlich expected (for metals) when he bet Julian Simon. Here’s how it could happen.

Figure 7 shows a model of oil purchasing power in which the price of oil explodes. It’s a bit abstract, so let’s talk through the elements. I’ve plotted hypothetical growth rates for the price of oil and US nominal GDP per capita. A horizontal line indicates constant exponential growth. A positively sloped line indicates that growth is accelerating. In our model, income (nominal GDP per capita) grows at a constant rate. The growth rate of the price of oil, however, accelerates over time.

Figure 7: A model of oil purchasing power in which the price of oil explodes. I assume here that nominal GDP per capita grows constantly at roughly 4% per year (the average US growth rate over the last 160 years). The growth rate of oil prices accelerates with time. The result is that in the future, oil prices explode and oil gets increasingly unaffordable. [Sources and methods].

What’s most important, in Figure 7, are the shaded regions. They tell us whether oil is getting more affordable or less affordable. The red shaded region indicates that oil is getting more affordable. That’s because income (GDP per capita) is growing faster than the price of oil. So oil purchasing power increases. The blue shaded region, in contrast, indicates that oil is getting less affordable. That’s because income grows more slowly than the price of oil. So oil purchasing power decreases.

Although idealized, this model is based in part on real facts. Since 1860, US nominal GDP per capita has grown, on average, by about 4% per year. And I’ve chosen the oil-price dynamics to roughly reproduce the growth (and plateau) of US oil purchasing power shown in Figure 2. That said, this model is meant as a scenario for the future.

Let’s make this future concrete. In it, your income grows year by year. But although you have more money, oil becomes less affordable. That’s because the price of oil grows faster than your income. And so your oil purchasing power declines continuously.

Let’s turn now to the actual price of oil. Assuming our model holds, Figure 8 shows the projected oil price. It’s an explosion worthy of Hotelling’s ‘rule’. By 2100, a barrel of oil will cost more than $10,000.

Figure 8: A future where the price of oil explodes. I predict future oil prices here using the model in Figure 7. [Sources and methods].
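The exact parameters behind Figures 7 and 8 aren’t given, so here is an illustrative reconstruction in Python. Income growth is held constant at 4% per year (as in the model); the oil-price growth rate accelerates linearly, and the 4%→11% range is my assumption, chosen only to reproduce the qualitative result of a price above $10,000 by 2100.

```python
# Scenario 1 sketch: constant income growth, accelerating price growth.
# Parameters are assumptions, not the article's exact model.

price = 40.0          # $/barrel in 2020 (roughly the historical average)
income_growth = 0.04  # constant nominal GDP per capita growth rate

years = list(range(2020, 2100))
n = len(years)

for i in range(n):
    # Oil-price growth rate rises linearly from 4% to 11% over the period
    price_growth = 0.04 + (0.11 - 0.04) * i / (n - 1)
    price *= 1 + price_growth

print(f"Projected 2100 oil price: ${price:,.0f} per barrel")
```

With these assumed growth rates, the 2100 price lands above $10,000. Purchasing power declines throughout, because income growth (4%) is always at or below price growth.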

I confess that this price explosion is what I expected when, in 2012, I bought oil futures. ‘We’re headed for an oil-scarce future,’ I thought. ‘The price of oil has nowhere to go but up. That’s a chance to make money!’

It was my Paul Ehrlich moment. Soon after I bought oil futures, the price of oil tanked. Luckily, I didn’t have much money in the game, so I had little to lose. Still, the principle irks me. Like Ehrlich, I thought that the price of a depleting resource would go up. I was wrong. And now I know why. If current trends are any indication, the price of oil will never explode (like in Figure 8). Instead, oil will get cheaper.

Scenario 2: Oil prices collapse

Scenario 1 imagines a Hotelling-like explosion of the price of oil. Assuming that oil production declines (as peak-oil theories predict), this price explosion is intuitive. That’s because almost everyone equates affordability with low prices. If a resource gets less affordable, we assume it’s because the price went up. Almost no one thinks of the alternative — that a resource could get less affordable because your income goes down.

We don’t think about this alternative because it involves something that few living people have experienced: the continuous contraction of income. Think about it this way. Most people are used to the annual ritual of asking for a raise. You may not get the raise, but no one (not you, not your boss) is surprised that you asked for one. That’s because for the last two centuries, growing incomes have been the norm. So asking for an annual raise has become a custom.

Now imagine an alternative reality. In it, asking for a raise is unthinkable. Instead, each year you beg your boss not to lower your income. Most years you’re unsuccessful. And so year after year, your income declines. The price of oil declines too, but not enough to offset your losses. And so oil gets cheaper, yet is increasingly unaffordable.

This alternative reality sounds like dystopian fiction. Yet if current trends are any indication, it’s the future we have in store. To see this fact, look at Figure 9. As with Figure 7, Figure 9 plots the growth rates of income (nominal GDP per capita) and the price of oil. The difference, though, is that Figure 9 shows real-world trends. I’ve plotted here the smoothed historical growth rates of US nominal GDP per capita and the price of oil. (Dashed lines extrapolate the recent trend into the future.)

Figure 9: Oil purchasing power in the real world … and projected future. Solid lines represent real-world trends for the growth of US nominal GDP per capita and the nominal price of oil. I’ve smoothed the data to more clearly show the long-term trend. Dashed lines continue the recent trend into the future. [Sources and methods].

Let’s look first at the growth of income (nominal GDP per capita). Other than a brief period in the 1860s, Americans’ average income rose consistently for the last 150 years. We know this because the growth rate of nominal GDP per capita was positive. Note, however, that this growth rate wasn’t constant. From 1860 to 1960, the growth rate of nominal GDP per capita accelerated. But starting in the 1970s, the trend reversed. Today, income growth rates are declining. If the trend continues, Americans are headed for a future in which incomes collapse. Every year, people will ask their boss not to lower their wage. Most years they’ll fail. And so incomes will decline.

With this dreary future in mind, let’s talk oil prices (again looking at Figure 9). Like income, the price of oil did not grow constantly. Instead, its growth tended to accelerate. But until the 1960s, incomes grew faster than the price of oil. So oil got more affordable. That changed during the oil crises of the 1970s. Oil prices exploded, while the growth of income slowed. As a result, oil got less affordable.

Today, the oil-price growth rate is headed south. If the trend continues, the price of oil isn’t going to explode, as many peak-oil theorists expect. It’s going to collapse. Figure 10 shows the prediction. In this future, the price of oil never gets above $120. And by 2100, oil won’t be $10,000 per barrel (as in Scenario 1). Instead, oil will be $5 a barrel.

Figure 10: A future where the price of oil collapses. I predict future oil prices here using the model in Figure 9. [Sources and methods].

At first glance, this future looks rosy. We’re headed for a world filled with cheap oil! (Never mind about climate change.) But in reality, Figure 10 paints a dystopian future. Yes, oil gets cheaper. But it also becomes less affordable. Why? Because incomes collapse faster than the price of oil. Every year, oil is cheaper. But every year you have less money. And so every year, you can afford less oil.
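The arithmetic behind “cheaper yet less affordable” fits in a few lines. The decline rates and starting values below are my own assumptions, purely to illustrate the mechanism: so long as income falls faster than the price, affordability falls even as the price falls.

```python
# Minimal illustration (assumed rates): the oil price falls 3%/year,
# but income falls 5%/year, so income buys fewer barrels every year.
income, price = 50_000.0, 100.0             # assumed starting values
INCOME_DECLINE, PRICE_DECLINE = 0.05, 0.03  # assumed annual decline rates

for year in range(1, 11):
    income *= 1 - INCOME_DECLINE
    price *= 1 - PRICE_DECLINE
    print(f"year {year:2d}: oil ${price:6.2f}/barrel, "
          f"income buys {income / price:5.1f} barrels")
# The price drops every year, yet so does the number of barrels
# a year's income can buy.
```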

Ehrlich vs. Tverberg

I’ll close by returning to where I started: the Simon-Ehrlich wager. What’s important about this wager is that it conforms to our expectations about prices. Ehrlich bet money on the idea that resource scarcity will cause prices to rise. It’s an idea that most people find intuitive. Simon bet money on an equally intuitive idea — that resource abundance will cause prices to fall.

Looking at the bet, you can see that it’s really about two distinct hypotheses. The first hypothesis is that we’re exhausting our natural resources. The second hypothesis is that prices will rise in response. What’s interesting is that most of the discussion about the Simon-Ehrlich wager conflates the two hypotheses. Because Ehrlich lost the bet, people assume that resource scarcity is not a problem. But that’s faulty logic. What’s also possible (and what all the evidence points towards) is that the price hypothesis is wrong. As we exhaust natural resources, their price does not explode. Instead, it collapses.

Even though Ehrlich lost his bet, his thinking remains widespread. Just look at peak-oil theory. Many peak-oil theorists think that as oil production declines, the price of oil will explode. But not everyone is convinced. The notable exception is the analyst Gail Tverberg. For years, Tverberg has been arguing that we’re headed for lower oil prices. (Here’s a thread of her writing on deflation.) But she doesn’t think prices will fall because of resource abundance; she’s a Malthusian, much like Paul Ehrlich. Instead, Tverberg thinks we’re headed for a world where oil is scarce yet cheap.

To many people, such a future makes little sense. But that’s because we can’t imagine a world in which incomes collapse. Tverberg can. And so I propose a hypothetical bet for the future: Ehrlich vs. Tverberg. Both assume that oil will get more scarce. But in the Ehrlich scenario, oil prices explode. In the Tverberg scenario, oil prices collapse.

I once thought that the Ehrlich scenario was all but guaranteed. But today, my money’s on Tverberg. In the future, oil will be scarce and unaffordable. But I think it will also be cheap.

Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.


Sources and methods

Data for inflation-adjusted oil prices (Figure 1) is from:

In Figure 2, data for the nominal price of oil comes from:

Data for nominal GDP per capita comes from:

  • 1860–1947: Historical Statistics of the United States, Millennial Edition, Table Ca12
  • 1947–present: FRED series A939RC0Q052SBEA

Data for oil-and-gas labor productivity (Figure 3) is from:

  • Oil-and-gas employment:
    • 1860–1928: Historical Statistics of the United States, Colonial Times to 1970, Table M5-6.
    • 1929–present: Bureau of Economic Analysis, Table 6.5A–D
  • Oil-and-gas energy production:
    • 1860–1948: Historical Statistics of the United States, Millennial Edition, Table Db155–163
    • 1949–present: Energy Information Administration, Table 1.2

In Figure 9, I smooth GDP and oil-price growth rates using a LOESS regression.
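For readers unfamiliar with LOESS, here is a minimal version of the idea: at each point, fit a weighted straight line to the nearest neighbours, using tricube weights. The blog’s own analysis is presumably done in R’s `loess()`; this numpy sketch (my own, with made-up example data) is only an illustration of the technique.

```python
# Minimal LOESS: degree-1 local regression with tricube weights.
# This is an illustrative reimplementation, not the blog's actual code.
import numpy as np

def loess(x, y, frac=0.3):
    """Smooth y(x) with locally weighted linear regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(3, int(np.ceil(frac * n)))  # points in each local window
    smoothed = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                      # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube weights
        # weighted least-squares line, evaluated at x[i]
        coeffs = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
        smoothed[i] = np.polyval(coeffs, x[i])
    return smoothed

# Example: recover a slow trend from a noisy growth-rate series
rng = np.random.default_rng(0)
years = np.arange(1900, 2020)
trend = 0.02 + 0.0002 * (years - 1900)          # hypothetical trend
noisy = trend + rng.normal(0, 0.01, len(years))  # add noise
smooth = loess(years, noisy, frac=0.3)
```

The `frac` parameter plays the role of R’s `span`: larger values average over wider windows and produce smoother long-term trends, which is what the dashed-line extrapolations in Figure 9 rely on.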

Notes

  1. The reality is that Hotelling’s ‘rule’ says nothing about non-renewable resources. It is a model of private property. Because an owner can always sell their property and collect interest on the cash, a rational owner will hold onto the property only if its value grows at the rate of interest (or more). What Hotelling’s ‘rule’ really predicts is that the (nominal) price of all property should grow exponentially. Over the last century, it has. But this fact tells us nothing about the nature of non-renewable resources.↩
  2. You’ll notice that I use scare quotes around units of real currency — ‘2020 $US’. That’s because these units don’t exist. There are 2020 prices. But projecting these prices backwards in time creates many ambiguities that economists typically don’t acknowledge. For a discussion, see Real GDP: The flawed metric at the heart of macroeconomics.↩
  3. The problem with measuring oil productivity by itself (rather than together with gas) is that oil-and-gas employment are reported together. That’s because gas is often harvested from oil wells. It’s easy to separate the resulting production of energy (oil vs. gas) but difficult (probably, impossible) to separate the labor input.↩

Further reading

Hotelling, H. (1931). The economics of exhaustible resources. Journal of Political Economy, 39(2), 137–175.

Sabin, P. (2013). The bet: Paul Ehrlich, Julian Simon, and our gamble over Earth’s future. Yale University Press.

The deceitful image of money scarcity has no place in our society.

Image by Sabine Van Erp from Pixabay

Society is indeed a contract. It is a partnership … not only between those who are living, but those who are dead and those who are to be born.

Edmund Burke
Reflections on the Revolution in France – 1790

 

In the news this week the government did yet another turnabout following Marcus Rashford’s campaign and public pressure by announcing a £170m winter grant scheme to support low-income families. During the same week, the Trussell Trust reported that there had been a 47% rise in the number of parcels distributed via its networks in the 6 months to September 2020 compared to the previous year; that more than 1.2 million parcels were distributed, of which 470,000 were to children. And this was, it suggested, just the tip of the iceberg as these figures did not include the number of people helped by the numerous local organisations, independent food banks and local authorities who have stepped in to support their communities.

Whilst the Trust attributed some of these increases to the pandemic, which has had a devastating effect, it was clear about the underlying reasons why people need support. The key issues, it said, related to a fundamental lack of income, which has left people struggling to afford the essentials. As Emma Revie, the Chief Executive of the Trust, commented to the Guardian:

 

‘We have to find better ways of supporting one another as a society than leaving people to rely on food charity. It’s not just about ending food banks, it’s about finding an alternative to the need for mass distribution of charity food in the fifth wealthiest country in the world.’

 

The roots of poverty lie in adherence to a failed market-focused economic ideology and in the government policies and spending decisions that flow from it. Child hunger is just one of many interlinked consequences, and research published this week by the Living Wage Foundation showed that during 2019/20 nearly three-quarters of independent care workers in England were paid less than the real living wage. They are, it said, among the 5.2 million workers in low-paid, insecure jobs, 1.3 million of whom are key workers. The analysis noted that care workers earn an average of £8.50 per hour and that 24% are on zero-hours contracts. It, like the Trussell Trust, highlighted the existing inequalities in our society, which have hit the lowest earners the hardest, and that was before the pandemic struck. Whilst the nation clapped with Boris Johnson, Tabitha, a care worker, said:

 

‘I feel like a Roman Gladiator going into the ring on a night shift. Everyone is clapping for you, but you’re pitting yourself against a deadly disease without the proper pay and protection.’

 

As the government has increasingly ceded its responsibilities for its citizens through cuts to spending on public infrastructure, both local and national, and as private profit-seeking companies were invited to tender for contracts to deliver social care services, the consequences have been devastating, for those being cared for as much as for those doing the vital work of caring for others.

As the government lauded its financial acumen in managing its accounts, its decisions have led to a vicious cycle of deprivation, poverty and public infrastructure decay. The connections are irrefutable. Surely it must dawn on the nation soon, as government and other institutions begin to wind up and reinforce the household budget narrative in support of action to get the public finances ‘back in order’ after all this spending, that its health and economic well-being is being reduced to a question of balanced budgets and unaffordability.

At the same time as Rishi Sunak suggested that he may increase capital gains tax to pay for billions borrowed and the COVID-19 debt which is supposedly racking up, the Resolution Foundation published its report entitled Unhealthy Finances: How to support the economy today and repair the public finances tomorrow. In its report, it focused specifically on the dual challenge it believes the government is facing ‘to ensure that there is sufficient fiscal support through the crisis and recovery, and setting fiscal policy on a sustainable path.’

Even though it said that the government should commit not to start fiscal consolidation until the economy had recovered, it still claimed that the government must do what is required to ensure that the public finances are sustainable, and adopt a balanced current budget rule.

In the report, it suggested continuing to use low interest rates as a tool for supporting the economy and noted the fiscal damage being caused by lower tax receipts and higher spending. It proposed, amongst other things, reforming the tax system to raise revenue and imposing a health and social care levy to provide any additional revenue required.

Here we have all the usual implied but false language narratives about government spending – taxing to spend, borrowing and unsustainable public debt, repairing the public finances, financial sustainability, balanced budgets.

We’ve been here many times before and clearly the establishment is determined not to lose control of that narrative. The fightback is in full swing. The deficit and debt worrywarts are working overtime to keep the public in line. Heaven forbid that people should learn the truth about how the government really spends!

However, as that knowledge is going more mainstream, questions are being asked as it is becoming ever clearer that human and planetary well-being lies with government choice, i.e. who gets the money. It is therefore intolerable to think, in the light of this growing understanding of the spending capacity of government, that government and think tanks are suggesting that taxes be increased to pay for this imaginary round of borrowing and the subsequent imaginary national debt which has arisen from it.

Whilst one could certainly make a case for reforming the whole tax system to ensure a fairer distribution of wealth, the justification by the Resolution Foundation or the Chancellor for increasing taxes does not stack up for the following two reasons:

  • That such action would quite simply take money out of the economy at a time when it would still be recovering from the economic effects of the current crisis compounded by previous cuts to public spending which have had a cumulative effect on the economy. Private debt levels prior to the pandemic were already high but have soared by 66% since May to £10.3bn. The number of people in serious debt has doubled since March rising to 1.2m with a further 3 million at risk of falling into arrears. Raising taxes with this scenario in mind would seem self-defeating and destructive.
  • That taxes don’t fund government spending and cannot be used to reduce public debt.

Quite simply, the ‘taxes fund spending’ story is just a lot of accounting smoke and mirrors to suit an agenda which aims to keep people downtrodden and accepting of their fate. The trope of financial unaffordability is deep within our household budget psyches, and shifting such narratives can be hard work. Much depends on loosening our attachment to them through knowledge and, more importantly, the desire for something better.

Taxes will not pay off the national debt any more than they will financially rescue a failing social care system. Our public services, including social care and the NHS, are in crisis as a result of government choices, not a lack of tax money the government can collect. Privatisation and the profit motive, along with public spending cuts, have both played a role in destroying what could and should have been funded publicly.

The deceitful image of money scarcity, with government reliant on taxes and borrowing to spend, has no place in our society as children go unnecessarily hungry and our young people face a gloomy future.

In this week’s Guardian, Patrick Collinson wrote that the COVID-19 crisis could have a lasting impact on young people’s pensions: indeed, the lives of young people have been turned upside down with future employment prospects damaged and life opportunities curtailed. However, future private pensions are the least of the worries of young people as they start out in life. It is the government’s role now to ensure, through its policies, that young people can thrive and build themselves a future and decent state pension provision should be a significant part of that.

We need to expose the con of private pensions, which are reliant on fickle markets and a stable economy. Margaret Thatcher’s economic vision, inspired by Friedrich Hayek and Milton Friedman, reflected her belief in the superiority of the market. The idea implicit in this dogma was that the welfare state deprived people of the opportunity to make their own provision for old age. Thus, we witnessed the opening up of the market for private pensions in an attempt to weaken the state’s own pension provision. The current crisis is exposing their weakness, which was becoming clear even before the pandemic, and invites us to question the state’s ideological reliance on the private pension sector.

The solutions are to provide decent state pensions to give retired people financial security and a decent standard of living, and to reduce the pension age. The current round of retirement age increases is based on the lie that state pensions will become increasingly unaffordable as the birth rate falls and the tax take reduces, which will, it is claimed, place an unacceptable burden on future taxpayers. We need to break the false connection between the payment of tax and the receipt of a pension.

The question of how it can be paid for doesn’t arise if we understand how the government spends. It would be paid for in the same way the government always pays for things: by creating the money out of nowhere, a simple transfer made with a few computer keystrokes, authorised by the Treasury and carried out by the central bank. We need to knock on the head the idea that a portion of our tax is being collected somewhere in a savings pot to be divvied out at retirement, or indeed that taxes serve to pay for public services.

Assuming that the government has invested through sufficient spending on public and social infrastructure, including education, training and new technologies, and by embracing full employment policies, then an earlier retirement and a good state pension are possible. Such investment will not put a financial burden on the lives of future generations; it will enhance them, and those of retirees. We just have to decide as a nation how we want to share out the real, but finite, resources at our disposal to create a better life.

Earlier this week, there was an interesting exchange of views on a Facebook labour group when someone posted the following:

 

‘You do realise the Bank of England are printing new notes by the million. All we will get from Johnson is a bankrupt country.’

 

It is disappointing to observe, in that post and the thread that followed, that some individuals on the left seem to get pleasure from continually shooting themselves in the foot. Excoriating what they see as a reckless Tory government because, in their view, it is spending too much and driving us towards bankruptcy or hyperinflation adds a certain touch of irony to the criticism, since it was the Conservatives who used similar arguments against Labour’s spending plans in the last election. It all boils down to ‘where is the money going to come from and who is going to pay?’

This type of scaremongering is damaging to a left-wing agenda and is born of a lack of knowledge. It is regrettable that, even when presented with the facts about how the government spends, many still choose to persist in reinforcing the myths, ignoring that it is impossible for the UK government to go bankrupt or run out of money, and that the government does not need our taxes to pay for public services, not even the taxes of the rich. Ultimately, that is to deny monetary reality and what that knowledge could mean for any future progressive government’s spending plans, if such a government were to exist (although that is currently quite another story!).

If we are going to debate, let’s do it from a position of knowledge rather than making exaggerated and untrue statements about how we are all going to hell in a hand cart because the Tories have spent too much. Let’s remember that in 2010 the same arguments were being levelled at Labour following the Global Financial Crash; another era of suffering and hardship for many. And that it was the Tories who said in 2010 that there was no alternative to cuts to public spending to get the public finances back in order and yet have suddenly without a problem found the monetary wherewithal to save the economy from the worst effects of the crisis. For the left to use this argument against the Conservatives is counter-productive to future progressive agendas.

The household budget ping pong played by successive governments at election time, which examines critically the fiscal record of a government or asks how its spending plans will be paid for, has done great damage to society and the economy. Such arguments overlook the real measures of economic health relating to how government serves its citizens in real social improvements from the provision of health and social care, education, policing, a social safety net and public transport networks to local service provision of libraries, municipal parks and refuse collection. All these things which previously were determined as the public good have been attacked by a governing elite serving its own interests and those of its financial donors.

As Peter Fleming, professor of business and society at City, University of London, wrote: ‘Austerity [redefined] these things as fiscal liabilities or deficits rather than shared investments in common decency’.

Let’s argue for a better, fairer and kinder society based on real knowledge of monetary reality and not baseless statements which only serve to promote continuing political inertia on the left in terms of understanding how money works and how that knowledge fundamentally changes how we can respond to today’s and tomorrow’s challenges.

 

Newly published

This week, we published a fact sheet, Negative Interest Rates, and a new paper by Phil Armstrong and Warren Mosler, Weimar Republic Hyperinflation through a Modern Monetary Theory Lens.

 

 


The post The deceitful image of money scarcity has no place in our society. appeared first on The Gower Initiative for Modern Money Studies.
