# Eurozone recessions, a historical perspective

Special

- **Against a backdrop of increasing talk about a new Eurozone recession, we first take a deep dive into the historical data**
- **We show that since the 1960s there have basically been two types of recessions: i) common Eurozone recessions and ii) idiosyncratic (country-specific) recessions**
- **Time-sequencing between European recessions is found to be weak**
- **But short-term projections for GDP (based on monthly indicators such as retail sales, industrial production and economic confidence) point to continued weakness in Germany**
- **For now (with the emphasis on the latter), the evidence suggests that we are still looking mostly at the second type of recession (in Germany) rather than the first type**

*This piece is the first in a series, with the next publication looking at how we gauge the current and future risk of a recession, bearing in mind the historical evidence for Eurozone member states*

## Introduction

Since the summer months there has been increasing talk about the possibility of a new upcoming Eurozone recession. Especially for Germany this appears to be quite acute, as GDP already contracted in Q2 (-0.1%) and most data are not pointing towards any improvement in the coming months. Meanwhile (as we will also ‘confirm’ below), Italy already experienced a recession last year. However, disregarding the probability of a future recession in the Eurozone for a moment (we actually believe its likelihood is quite high), we first take a deep dive into the historical data.

The aim of this piece is to get a better understanding of the historical incidence of recessions in the Eurozone, what their average duration is and whether there is a commonality (or even some form of sequencing) between member states. Getting a better understanding of these issues will also allow us to put recent developments (i.e., the concerns about Germany) into a broader perspective. To that end we also develop a series of monthly GDP nowcast data and near-term recession indicator estimates. These will serve as the stepping stone for a next piece in which we explore the future likelihood of recession(s). But before we dive into the data, let’s first take a step back: why are we actually looking at recessions and how do we define/establish them?

### Why ‘recession risk’ is hot

Through the ebb and flow of economic activity, recessions have always had a special meaning, as they tend to have a significant impact on the revenues and profits of non-financial corporations (ultimately leading to defaults, for example) and on the financial sector. But most of all, they are worth our special attention because recessions tend to cause economic hardship for households, in the form of rising unemployment and slowing real wages.

The last two recessions in the Eurozone, the one following the Global Financial Crisis in 2008 and the one following the sovereign debt crisis in 2011, both serve as a case in point.

And this time around, assessing recession risk may be more relevant than ever. First of all, several Eurozone member states have hardly recovered from the previous two recessions (figure 1) and, even though unemployment is now almost back to its pre-recession low of around 7.5%, the EU’s image among many citizens is still tarnished (figure 2). This is especially relevant in the context of rising populism and nationalism in the member states. As we wrote in a previous publication, political divisions at the national level may hamper stable government formation and policy making. At the European level – bearing in mind that anti-EU parties took almost a third of the popular vote in May – this may hold back the reform pace, whilst a flaring up of tensions between ‘Brussels’ and the member states remains a key risk. Although a discussion about fresh budgetary stimulus appears to be gaining some pace, the risk is very real that European politicians will keep squabbling, implying that any joint fiscal response will come too late or not at all. This brings us to the second reason why we should fear a recession even more than in the past. Whilst the ECB has gone to great lengths to fight the previous two slumps – taking excess liquidity to unprecedented highs, taking its deposit rate into negative territory and buying up almost 33% of the sovereign/sub-sovereign debt stock in some countries – its future options are likely to be much more limited, a point that was very much demonstrated this week.

## How do we define and establish a recession?

So, what establishes a recession? There are several viewpoints here. Some would probably argue that a sharp slowdown in growth can already be called a recession. After all, business gets tougher. However, we adopt a narrower definition, which is a contraction in overall output (GDP) for a certain period of time. This is also a definition that is relatively measurable and fits with the ‘hardship’ impact of a recession. Two consecutive quarters of negative growth (so an actual fall in GDP) is often used as a rule-of-thumb, but this is not the undisputed standard. Indeed, there are several institutes around the world that establish recessions using a more rigorous approach, for example by looking at a broad set of data and/or adjusting data for special factors (such as a general strike). These institutes’ findings are often used as a benchmark. Well-known in this respect are the US National Bureau of Economic Research (NBER) and the Centre for Economic Policy Research (CEPR). The OECD also has a set of indicators that establishes the peaks and troughs in the business cycle. Unfortunately, for many Eurozone member states such data are not available. Moreover, given the various definitions that are out there, a comparative analysis would be more complicated.

Therefore, we chose the *Bry-Boschan* algorithm to establish recessions in the Eurozone member states as well as for the Eurozone as a whole. In a nutshell, the *BBQ* algorithm, as it is also known, is a set of rules that looks for the turning points in the cycle. It was originally developed to mimic the NBER’s approach in establishing recessions, but in a way it is just a more sophisticated version of the ‘two consecutive quarters of negative growth’ rule of thumb. Nevertheless, this approach has several advantages:

- It is transparent through a fixed set of rules
- It is timely; whenever a new quarterly GDP number comes out, we can run the algo
- It is consistent across countries, allowing comparisons

A more detailed description of the *Bry-Boschan* algorithm can be found in appendix A at the end of this publication. We have chosen gross domestic product (GDP) to be our main series of interest because it is the broadest measure of economic activity. Additionally – and also in the interest of timeliness – we want to establish the business cycle turning points at a *monthly* rather than a quarterly frequency. The BBQ algorithm can be adjusted to do just that. However, that also requires monthly time series of GDP.

In order to achieve this, we estimated a model for each member state’s GDP. By using information at the higher frequency interval we can make an estimate of monthly GDP. In particular, we use data for retail sales and industrial production. Both indicators are available at a monthly frequency, are relatively timely and are strongly correlated with GDP. To estimate the models, we used the *Kalman filter* technique. We explain our approach in more detail in appendix B at the end of this research note. Because these monthly GDP estimates tend to be a bit more volatile, we use a 3-month moving average of monthly GDP for the analysis below. Our data are entirely consistent with the GDP numbers at the quarterly frequency (in other words, our monthly GDP estimates sum exactly to the quarterly totals as published by Eurostat). Another benefit of this approach is that we can immediately use the model to *nowcast* or even forecast GDP, as soon as fresh monthly data become available. So, off we go!

## The BBQ algorithm in action

We are now ready to feed our monthly GDP dataset for 12 Euro area member states and the EA19 aggregate into the algorithm[1]. When looking at the Euro area as a whole, the algorithm finds four recessions since 1960: the first oil crisis impact (July 1974 to May 1975), the early nineties recession (February 1992 to February 1993), the global financial crisis (March 2008 to June 2009) and the sovereign debt crisis (April 2011 to February 2013).

Some general descriptive statistics stemming from the algorithm are shown in the table below. Most countries have experienced more recessions than the Eurozone as a whole. Italy and Greece are notorious. But Germany has also experienced more recessions than the Eurozone aggregate and even more than the simple average across member states (which is shown in the last column).

The average losses during recessions are remarkably consistent (with Greece perhaps being the exception). Durations are slightly more dispersed, but, historically speaking, recessions have taken some 9-18 months (and about a year on average). Of the bigger member states, France shows remarkably few recessions with contractions also being relatively short. As a result, the total output loss during recessions in the past 58 years is only around 13%. Germany, Italy, Austria and Belgium also show relatively modest cumulative output losses due to recessions, whilst the Netherlands, Spain, Portugal and Luxembourg show somewhat higher cumulative losses. But Greece[2], Ireland and Finland really stand out.

### How well does the BBQ algorithm mimic CEPR recessions?

Before we take a closer look at the commonality of European recessions, we first ask whether our method actually corresponds with an alternative and independent source. We therefore compare our results with those of the CEPR, who establish recessions in their Euro Area Business Cycle Dating Committee (EABCDC), but (unfortunately) only at the aggregate Eurozone level.

Figure 3 shows the CEPR recessions alongside our BBQ recession indicator for the Eurozone as a whole. Because the CEPR dating is only available on a quarterly basis, we have assumed (somewhat arbitrarily) that the CEPR turning points start and end at the beginning of each quarter. In addition, we also calculated a GDP-weighted average of our recession indicator for the 12 individual member states analysed.

The algorithm picks up four of the five CEPR recessions (with only slight differences in phase[3]). The main exception is the downturn in the early 1980s (following the second oil crisis). However, the weighted average of our recession indicator does highlight that this 1980s recession is in fact a plausible candidate for being seen as a proper recession, with an average reading of between 0.4 and 0.6 during that period. Another potential candidate seems to be the post-internet-crash downturn of 2001-2004, but this is less conclusive.

Altogether, then, we conclude that the BBQ algorithm serves as a pretty good method to establish recessions.

### And what about commonality?

The second question is to what extent recessions in the Eurozone ‘move together’ (which would imply that they are driven by a common shock). To some extent, the figure above already suggests that an aggregate Eurozone recession, or a high share (>50%) of member states being in recession at the same time, is probably a good indicator of such commonality.

We can formalize this in the form of a statistic which counts how often expansions (0) and recessions (1) take place at the same time. The ‘concordance’ statistic, which is taken from Harding and Pagan (2002), does just that[4]. The number represents the share of time in which two countries or regions are in the same phase of the business cycle. This statistic is shown in figure 4 below. In a way, we can interpret this as an ‘R-squared’ measure: the higher the share, the more the two countries’ business cycles move in the same rhythm.
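As a minimal sketch in Python (with hypothetical toy indicator series, where 1 marks a recession month), the Harding-Pagan concordance statistic reduces to a single line:

```python
import numpy as np

def concordance(s_x, s_y):
    """Harding-Pagan concordance: the share of time two binary
    recession indicators (1 = recession) are in the same phase."""
    s_x, s_y = np.asarray(s_x), np.asarray(s_y)
    return np.mean(s_x * s_y + (1 - s_x) * (1 - s_y))

# toy example: two indicators that overlap in 10 of 12 months
a = np.array([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0])
b = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0])
print(concordance(a, b))  # 10/12 of the months are in the same phase
```

A value of 1 means the two cycles are always in the same phase; note that two countries without any recessions at all would also score exactly 1, which is the bias discussed below.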

A glance at this statistic immediately shows that in almost all cases, the concordance between the business cycle indicator for a single member state and the Eurozone (the first line in figure 4) is higher than any concordance between individual member states. However, because recessions are a relatively rare phenomenon (the average duration of an expansion is about five to ten times that of a recession), one would expect this statistic to be biased towards 1[5].

So, to get an even more granular picture, we can also look at the business cycle timeline for all the member states as well as the Eurozone aggregate in one chart. This is shown in figure 5 below. On visual inspection, we can probably conclude that, if we removed the time periods in which there was a Eurozone recession, we are likely to end up with idiosyncratic recessions. This is what we have done in figure 6 on the following page. More specifically we have removed all instances where there was a CEPR recession (so we earmarked the 1980s recession as a common Eurozone recession).

To take account of possible time lags (one country being hit earlier than others), we have allowed for a 6-month margin on both sides of the CEPR-dated recessions.

Again, on visual inspection, it would seem that all the remaining (idiosyncratic) recessions are pretty much scattered, without any clear pattern of time sequencing. Italy – according to the algorithm – was the only Eurozone member state that experienced a recession in 2018. And, remarkably so, there is also little evidence of concordance and sequencing effects between countries that traditionally have had strong economic ties or have behaved as economic blocs (say, Germany – Netherlands, or Southern Europe).

However, to some extent, this may be the result of the order of countries that we chose for figures 5 and 6. So to test more formally whether some countries tend to be early movers or late movers, we have run a series of (pairwise) Granger causality tests[6]. The results of this exercise are shown in figure 7, where the sequencing (causality) is supposed to run from column to row.

The main result from these tests is that the smaller member states tend to have more ‘forecasting’ power than the big member states. In particular, Portugal, Ireland, Belgium and Luxembourg show a relatively high degree of ‘exogeneity’ (i.e. they have more impact on the other countries than other countries have on them), whereas the opposite holds for countries such as Germany, Finland, France and Austria. This is a somewhat surprising result. Even when we zoom in on the significant pairs, the results do not make much sense (perhaps the impact from the Netherlands on Belgium is an exception)[7].
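To illustrate the mechanics behind such a test, the sketch below implements a pairwise Granger F-test with plain numpy on simulated data. It is only a rough stand-in: the note itself runs the tests with 6 lags on the recession indicators, while the lag length, data and variable names here are illustrative.

```python
import numpy as np

def _lags(v, p):
    # matrix of v lagged 1..p, aligned with v[p:]
    return np.column_stack([v[p - j:len(v) - j] for j in range(1, p + 1)])

def _rss(y, X):
    # OLS residual sum of squares (with a constant term)
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def granger_f(y, x, p):
    """F statistic for 'x Granger-causes y' with p lags: do lags of x
    reduce the residual variance of an AR(p) model for y?"""
    y, x = np.asarray(y, float), np.asarray(x, float)
    yt = y[p:]
    rss_r = _rss(yt, _lags(y, p))                            # restricted
    rss_u = _rss(yt, np.hstack([_lags(y, p), _lags(x, p)]))  # unrestricted
    n, k_u = len(yt), 2 * p + 1
    return ((rss_r - rss_u) / p) / (rss_u / (n - k_u))

# simulated example: x leads y by one period
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(y, x, p=2)   # x -> y: should be large
f_yx = granger_f(x, y, p=2)   # y -> x: should be small
print(f_xy, f_yx)
```

The F statistic is compared with an F(p, n-k) critical value; a large value in one direction and a small one in the other is what an ‘early mover’ pattern would look like.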

In figure 7, we have sorted the data from figure 5, with the least exogenous (Germany) to the most exogenous (Portugal). A glance at this figure does seem to confirm the sequencing derived from the Granger causality tests, as there is a slight tendency now for the blue bars to be tilted from the lower left corner to the upper right corner. Having said this, this is mostly visible for the common recessions.

Overall, then, we believe we should take these results with a pinch of salt, especially because it is difficult to make logical sense of them. Based on this analysis, the evidence for any specific sequencing between recessions in Eurozone member states is weak.

The corollary of that conclusion is that a recession in Germany need not automatically imply a recession in other member states or for the region as a whole.

## How are things looking right now (-cast)?

The aim of this piece was to get a better understanding of the historical incidence of recessions in the Eurozone and whether we can say something about their commonality. As it turns out, we can loosely speak of two types of recessions: i) *idiosyncratic* recessions (caused by country-specific factors) and ii) *common* Eurozone recessions. Sequencing effects appear to be limited, if they exist at all.

Against the backdrop of recent disappointing data, notably in Germany, it is of course interesting to ask which of the two types of recession is currently developing. Although we haven’t really discussed indicators that reliably forecast recessions, the methods we have used in this research note do allow us to take a peek into the (near-term) future.

After all, the relationship between GDP growth and the monthly indicators has already been captured in our Kalman filter approach. To add a little extra forecasting power to our model, we have also estimated twelve separate VAR models (6 lags, monthly data starting in 1980) in order to make short-term projections of the two key variables of interest: industrial production and retail sales. We have added monthly changes in consumer confidence and industrial confidence to those models (both from the EC survey).

In the next step we have made projections for each country for the two key variables and fed this into our monthly GDP model. This allows us to make projections one or two quarters into the future (although we would like to warn that the model is, economically speaking, very limited).
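The fit-and-iterate mechanics behind such VAR projections can be sketched in a few lines of numpy. This is a toy version under stated assumptions: one lag instead of the note's six, two simulated variables standing in for industrial production and retail sales growth, and no confidence variables.

```python
import numpy as np

def fit_var(Y, p):
    """OLS estimate of a VAR(p): Y[t] = c + A1 Y[t-1] + ... + Ap Y[t-p] + e.
    Y is a (T, k) array; returns the stacked coefficient matrix."""
    T, k = Y.shape
    X = np.column_stack([np.ones(T - p)] +
                        [Y[p - j:T - j] for j in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)
    return B  # shape (1 + k*p, k)

def var_forecast(Y, B, p, steps):
    """Iterate the fitted VAR forward `steps` periods from the end of Y."""
    hist = [row for row in Y]
    for _ in range(steps):
        x = np.concatenate([[1.0]] + [hist[-j] for j in range(1, p + 1)])
        hist.append(x @ B)
    return np.array(hist[len(Y):])

# simulated two-variable system (stand-ins for IP and retail sales growth)
rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
Y = np.zeros((400, 2))
for t in range(1, 400):
    Y[t] = A @ Y[t - 1] + 0.05 * rng.normal(size=2)

B = fit_var(Y, p=1)
fc = var_forecast(Y, B, p=1, steps=3)  # three-month-ahead projection path
print(fc)
```

The projected indicator paths would then be fed into the monthly GDP model, as described in the text.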

Bearing in mind this caveat, we can see in figure 8 below that the model clearly points to weaker (and negative) GDP growth in Germany for the remainder of 2019. Overall Eurozone GDP growth (in %q/q terms) is expected to slow a little bit further in 2019Q3 and to edge up ever so slightly in Q4. But Germany is the only country for which (even more) negative growth is projected in the near-term. This is, however, largely offset by stronger GDP growth in Italy, Spain, Belgium and France.

Hence, the upshot of these results is that the Eurozone does not appear to have entered a recession just yet and that outright falls in output are only projected for Germany (thus leading our BBQ algorithm to earmark this as an idiosyncratic recession for now).

That said, we cannot stress enough that the techniques employed in this research note are only applicable to assess developments in the near-term future. The future beyond this point will be the topic of our next note.

## Appendix A – The Bry-Boschan algorithm

To establish the cyclical turning points in the economy (with GDP being our main series of interest), we make use of the Bry-Boschan (1971) algorithm, which is also known as the BBQ rule.

This is a non-parametric and well-established approach to dating the business cycle. The procedure was invented at the NBER as a tool to automatically capture the phase of the business cycle.

The algorithm consists of a set of decision rules and works in two stages. First it selects the candidates for turning points and then it applies a censoring rule to eliminate the turns which do not satisfy some criteria (such as a minimum duration). The benefit from this set of rules is that it is simple, transparent and widely applicable[8].

A *peak* is established if the GDP level in that month is the maximum in a range of past and future observations (*t-k* to *t+k*) and a *trough* if it is the minimum in that same range. If neither condition is fulfilled, it is neither a peak nor a trough and the economy simply continues its previous phase (which could be either recession or expansion).

At the second stage, it is tested whether the minimum phase duration is *n* months (we have set this to 6 months in our analysis). Moreover, a complete cycle (from peak to trough and back) should last at least *p* months (we have set this to 15 months).
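The two-stage logic above can be sketched in Python. This is a minimal illustration on a hypothetical smooth series: the window and censoring parameters are illustrative, and the full Bry-Boschan procedure applies further censoring (such as the minimum complete-cycle length) that is omitted here for brevity.

```python
import numpy as np

def bbq_turning_points(y, k=5, min_phase=6):
    """Locate peaks/troughs in a (log) GDP level series.
    Stage 1: month t is a peak (trough) candidate if it is the maximum
    (minimum) of the surrounding t-k..t+k window.
    Stage 2: censoring -- enforce alternation and a minimum phase length.
    (Simplified: the full rule also enforces a minimum cycle length.)"""
    y = np.asarray(y, float)
    candidates = []
    for t in range(k, len(y) - k):
        window = y[t - k:t + k + 1]
        if y[t] == window.max():
            candidates.append((t, 'P'))
        elif y[t] == window.min():
            candidates.append((t, 'T'))
    turns = []
    for t, kind in candidates:
        if turns and turns[-1][1] == kind:
            # two peaks (troughs) in a row: keep the more extreme one
            prev = turns[-1][0]
            if (y[t] > y[prev]) if kind == 'P' else (y[t] < y[prev]):
                turns[-1] = (t, kind)
        elif turns and t - turns[-1][0] < min_phase:
            continue  # phase too short: drop the candidate
        else:
            turns.append((t, kind))
    return turns

# hypothetical smooth 'GDP' series with a 24-month cycle
y = np.sin(np.arange(60) * 2 * np.pi / 24)
print(bbq_turning_points(y))  # alternating peaks and troughs, 12 months apart
```

On this stylised series the rule recovers the alternating peaks and troughs of the sine cycle; on actual GDP data the censoring stages do most of the work.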

Figure 9 shows the output from the BBQ rule using monthly GDP data for the Eurozone (sample January 1960 – September 2019). These monthly data are a 3-month moving average of the smoothed *Kalman* filter estimates (explained in more detail in Appendix B). We have used a logarithmic scale.

## Appendix B – Back-casting monthly GDP

To assist us with the establishment of monthly turning points of the business cycle (for example with the Bry-Boschan algorithm) and to get a better idea of where the economy is heading during a certain quarter, we estimate monthly GDP levels for the biggest 12 EMU member states as well as the EA19 aggregate. More granular data will also be helpful for future forecasting purposes.

### Indicator approach

Monthly indicators can tell us something about the development of economic activity *during* a certain quarter. That is our starting premise. We also would like our method to ensure that our monthly GDP estimates add up to the quarterly total. For the purpose of this piece we consider two monthly indicators:

- Industrial production (index, 2015 = 100)
- Retail sales volume (index, 2015 = 100)

We have selected these indicators because they are relatively timely available in most countries (and usually ahead of the official GDP estimate) and they have a strong correlation (on a quarterly basis) with GDP, as is illustrated in figure 10 and table 3. Also from a logical point of view one would expect these data to be more reliable indicators (than, say, producer confidence) because they actually enter GDP in a fairly linear and current fashion (i.e. part of GDP consists of consumption (~retail sales) and industrial output).

In most cases, we needed to link several data series together in order to have long consistent time series that start in 1960. A similar exercise was carried out for quarterly GDP for each member state. In some cases (notably in the earlier part of our sample) only annual GDP numbers were available. We have used a simple linear interpolation in these cases[9].
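As a minimal illustration of such an interpolation (with hypothetical figures; the actual exercise may treat the annual totals slightly differently), `np.interp` fills a quarterly grid linearly between annual GDP levels:

```python
import numpy as np

# hypothetical annual GDP levels
years = np.array([1960.0, 1961.0, 1962.0])
gdp_a = np.array([100.0, 104.0, 109.0])

# quarterly grid; np.interp fills the gaps linearly
quarters = np.arange(1960.0, 1962.25, 0.25)
gdp_q = np.interp(quarters, years, gdp_a)
print(gdp_q)
```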

In all countries the correlation between quarterly GDP growth and industrial output growth is quite high; retail sales are in some cases less strongly correlated, particularly in Italy, Austria, Portugal, Ireland and Luxembourg. The monthly indicators are obviously more volatile on a month-on-month basis.

### The underlying model assumed

We follow the approach used by Issler and Notini (2016)[10]. We stipulate that monthly GDP can be approximated by some (linear) combination of the two selected indicators. However, in contrast to the authors, we apply first differences to avoid biased estimates. Tests indicate that all the series employed in this analysis are integrated of order 1 (in other words, they contain a unit root). In particular, then, we assume the following model that links our indicator variables to (unobserved) monthly GDP. Essentially this is the AD(*p*,*q*) model (autoregressive distributed lag) advocated by Hendry (1995)[11]:

Δln(GDP) = α(L)·Δln(GDP(-1)) + β(L)·Δln(IP) + γ(L)·Δln(RS) + ε

Where *L* is the lag operator, the α, β and γ polynomials each have a certain lag length and the errors are assumed to be *IID* (Issler and Notini assume that there is one AR(1) term and no further lags, but their estimate is very close to 1, implying first-differencing is required).

We assume an additional AR(1) term in the differenced data, but we also include several lags of the indicator variables to ‘pick up’ more information during the quarter.

### Kalman filter estimates

Because monthly GDP is unobserved, we need a special technique: the *Kalman filter*. Without going into detail, one of the main benefits of the Kalman filter is that the user can introduce unobserved variables. Another benefit is that we can immediately use the model to *nowcast* or even forecast GDP (at least in the short run), assuming that monthly indicators (or forecasts thereof) are available as well. Finally, this approach also allows us to ensure that we have consistency between monthly and quarterly data.

We use natural logs to prevent heteroscedasticity. Moreover, coefficients can now be interpreted as elasticities. In this particular case, we make use of the fact that the natural log of quarterly GDP can be approximated by the three-month average of the natural log of monthly GDP + ln(3)[12]. We can write the model in the following state-space form:

#### Signal equation:

ln(GDPQ) = select * ((sv1 + sv2 + sv3)/3 + ln(3))

Where select =1 in the last month of each quarter (March, June…) and 0 elsewhere
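A quick numerical check of the log approximation behind this signal equation (with hypothetical monthly levels):

```python
import numpy as np

# three hypothetical monthly GDP levels within one quarter
m = np.array([100.0, 101.0, 102.5])

lhs = np.log(m.sum())                 # ln of the quarterly total
rhs = np.log(m).mean() + np.log(3)    # the signal-equation approximation
print(lhs, rhs)  # nearly identical as long as the months are similar in size
```

The approximation is exact when the three months are equal and remains very close for realistic month-to-month changes.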

#### State vectors

@state sv1 = sv1(-1) + c(2)*(sv1(-1) - sv2(-1)) + c(3) + c(4)*Δln(IP(-2)) + c(5)*Δln(IP(-1)) + c(6)*Δln(IP) + c(7)*Δln(RS(-2)) + c(8)*Δln(RS(-1)) + c(9)*Δln(RS) + [var = exp(c(1))]

@state sv2 = sv1(-1)

@state sv3 = sv2(-1)

The first state vector basically says that unobserved monthly GDP equals last month’s GDP plus a constant plus an autoregressive coefficient c(2) on the previous month’s change in GDP and a combination of current and past monthly growth of industrial output and retail sales. We have assumed two lags for the independent variables. The second state vector is the 1-month lag of monthly GDP and the third state vector is the 2-month lag. GDPQ is observed quarterly GDP, available in the last month of each quarter and set to “NA” in the other months[13].

As said, the model can be estimated by the Kalman filter technique. The only thing we still need to do is feed the model with starting values for the state vectors (we approximated this with 1/3 of quarterly GDP as of 1960Q1) and a prior covariance matrix. Coefficient priors are set to zero.
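The key mechanism, namely that the filter keeps propagating the monthly state and only applies an update when an observation arrives, can be shown with a much simpler model. The sketch below is a local-level Kalman filter with missing observations; it is a stand-in for illustration only, not the AD(*p*,*q*) state-space above.

```python
import numpy as np

def kalman_local_level(y, q=1.0, r=1.0):
    """Minimal Kalman filter for a local-level model
        state:  x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        signal: y_t = x_t + v_t,      v_t ~ N(0, r)
    The update step is simply skipped when y_t is NaN -- the same
    missing-observation trick that lets a quarterly GDP figure
    discipline a monthly state vector."""
    x, P = 0.0, 1e6          # diffuse prior on the initial state
    out = np.empty(len(y))
    for t in range(len(y)):
        P = P + q            # predict: uncertainty grows each period
        if not np.isnan(y[t]):
            K = P / (P + r)  # update only when an observation exists
            x = x + K * (y[t] - x)
            P = (1 - K) * P
        out[t] = x
    return out

# 'quarterly' observations: a value every third period, NaN in between
y = np.array([np.nan, np.nan, 3.0, np.nan, np.nan, 6.0, np.nan, np.nan, 9.0])
print(kalman_local_level(y, q=1.0, r=0.01))
```

With a small measurement variance the filtered state snaps to each observation when it arrives, which mirrors how the full model is tied down to the Eurostat quarterly totals.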

### Results

The estimation results are reported in the table below (with coefficient estimates and *t*-values by country). The coefficients on the indicator variables are positive and significant in most cases (absolute *t*-values in excess of 2 at the 5% level). In some cases, the coefficients on the lagged indicator variables are negative, suggesting some ‘adjustment’ for overshooting in the previous month. For the Eurozone as a whole all indicator variables are positive (the total elasticities of industrial output and retail sales are 0.6 and 0.33, respectively), and similar values are found for most other countries.

This elasticity is actually quite high considering that industrial output, for example, only covers some 10-20% of the economy; its correlation with GDP is simply higher than its share would suggest. This probably also explains why the autoregressive term on lagged GDP is strongly negative, which is the case in most countries (Belgium excluded). The AR(1) term serves as an adjustment mechanism that offsets the higher volatility in the underlying indicators: past errors are quickly reversed and spread out over the following months.

To get an idea of the ‘fit’ of our model, we adopt the approach of Issler and Notini. The R^{2} measure tells us how much of the variance in monthly GDP can be explained by the model. We approximate a ‘smoothed’ R^{2} by taking the variance of changes in the smoothed monthly GDP estimate, divided by the sum of that variance and the variance of the error term:

R^{2} = var(Δŷ) / (var(Δŷ) + var(ε))

In the chart (figure 11) below we show the monthly estimates for Eurozone GDP. They are based on the smoothed monthly GDP estimates from the Kalman filter (aka *y*_{t} or the *sv1* vector). As can be seen the monthly data are more volatile (as was to be expected). By design, the series follows the quarterly GDP series quite closely, as the 3m running sum of the monthly series exactly equals the quarterly series at the end of each quarter.

Given the slight choppiness of the monthly series, it may be advisable to smooth it with, for example, a 2- or 3-month moving average. And as the (fairly random) snapshot in figure 12 shows, such a series may still give an idea of where things are heading during the quarter.

## Footnotes

[1] As parameters we used a turn phase of 5 months, a phase of 6 months, a minimum cycle of 13 months and a threshold of 10.4 as suggested in the literature; this is described in more detail in appendix A.

[2] Note that due to the use of a log scale, loss percentages can exceed 100%

[3] Note that the CEPR data start in 1970, whereas our analysis starts in 1960

[4] D. Harding and A. Pagan (2002), “Dissecting the cycle: a methodological investigation”, Journal of Monetary Economics, Elsevier, vol. 49(2), pp. 365-381

[5] Indeed, if two countries did not experience any recessions at all during the sample period, this concordance statistic would be exactly 1, but this is not extremely informative.

[6] We used the recession index variable shown in figure 5 and we took into account 6 lags for the observations. The Granger causality test tells us whether there is an unidirectional relation between two variables; this is based on the idea that the past can help forecast the future, but not the other way around. So if lagged observations of variable x do not significantly influence variable y, but lagged observations of y do have a significant impact on variable x then y is said to be Granger causing x.

[7] One possibility is that there is a third factor causing this. A global or US recession is a possible candidate in this case. So to test for this, we also ran the tests by including US recessions as a control variable. The results of this exercise did not significantly alter the results.

[8] See for an application to Belgium: V. Bodart, K.A. Kholodilin, F. Shadman-Mehta, “Dating and forecasting the Belgian business cycle”, IRES, Université catholique de Louvain, October 2003.

[9] The monthly and quarterly dataset, including monthly GDP estimates is available upon request.

[10] Joao Victor Issler and Hilton Notini, “Estimating Brazilian Monthly GDP: A State-Space Approach”, Revista Brasileira de Economia, March 2016

[11] Hendry, D. F., “Dynamic Econometrics”, Oxford University Press, February 1995

[12] See also: James Mitchell, Richard J. Smith, Martin R. Weale, Stephen Wright and Eduardo L. Salazar, “An Indicator of Monthly GDP and an Early Estimate of Quarterly GDP Growth”, The Economic Journal, Vol. 115, February 2005

[13] Issler and Notini set these values at zero and then impose that the fictional observed data has a very large variance so that the zero value is discounted and overwritten by the Kalman-filter technique. But our regression package automatically took care of this problem.