Introduction to bvartools

Franz X. Mohr

2022-01-21

Introduction

The package bvartools implements functions for Bayesian inference of linear vector autoregressive (VAR) models. It separates a typical BVAR analysis workflow into multiple steps: model set-up, prior specification, estimation (posterior simulation) and evaluation of the output.

In each step researchers can fine-tune a model according to their specific requirements or simply use the default framework for commonly used models and priors. Since version 0.1.0 the package comes with posterior simulation functions, so that no further simulation algorithms have to be implemented. For Bayesian inference of stationary VAR models the package covers a range of commonly used specifications and priors.

For Bayesian inference of cointegrated VAR models the package implements the algorithm of Koop, León-González and Strachan (2010), which places identification restrictions on the cointegration space, in several variants.

For Bayesian inference of dynamic factor models the package implements the algorithm used in the textbook of Chan, Koop, Poirier and Tobias (2019).

This introduction to bvartools provides the code to set up and estimate a basic Bayesian VAR (BVAR) model.1 The first part covers a basic workflow, where the standard posterior simulation algorithm of the package is employed for Bayesian inference. The second part presents a workflow in which the posterior simulation algorithm is written by the researcher.

For both illustrations the data set E1 from Lütkepohl (2006) is used. It contains data on West German fixed investment, disposable income and consumption expenditures in billions of DM from 1960Q1 to 1982Q4. As in the textbook, only the first 73 observations of the log-differenced series are used.

library(bvartools)
#> Loading required package: coda

# Load data
data("e1")
e1 <- diff(log(e1)) * 100

# Reduce number of observations
e1 <- window(e1, end = c(1978, 4))

# Plot the series
plot(e1)

Using bvartools with built-in algorithms

Setting up a model

The gen_var function produces an object that contains information on the specification of the VAR model to be estimated. The following code specifies a VAR(2) model with an intercept term. The number of iterations and burn-in draws is already specified at this stage.

model <- gen_var(e1, p = 2, deterministic = "const",
                 iterations = 5000, burnin = 1000)

Note that the function is also capable of generating more than one model. For example, specifying p = 0:2 would result in three models.
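
For illustration, a set of three candidate models with lag orders zero, one and two could be generated as follows:

# Generate three alternative models with p = 0, 1 and 2
model_set <- gen_var(e1, p = 0:2, deterministic = "const",
                     iterations = 5000, burnin = 1000)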

Adding model priors

Function add_priors produces priors for the specified model(s) in object model and augments the object accordingly.

model_with_priors <- add_priors(model,
                                coef = list(v_i = 0, v_i_det = 0),
                                sigma = list(df = 1, scale = .0001))

If researchers want to fine-tune individual prior specifications, this can be done by directly accessing the respective elements in object model_with_priors.
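
As a minimal sketch, assuming the priors are stored under model_with_priors$priors as they are used in the Gibbs sampler example at the end of this introduction, individual elements can be inspected and, if desired, replaced:

# Inspect the prior means and the prior precision matrix of the coefficients
model_with_priors$priors$coefficients$mu
model_with_priors$priors$coefficients$v_i

# Manual adjustment (not run, purely illustrative): tighten the prior precision
# model_with_priors$priors$coefficients$v_i <- diag(0.1, nrow(model_with_priors$priors$coefficients$v_i))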

Obtaining posterior draws

Function draw_posterior can be used to produce posterior draws for a model.

bvar_est <- draw_posterior(model_with_priors)
#> Estimating model...

If researchers prefer to use their own posterior algorithms, this can be done by specifying the argument FUN with a function that takes object model_with_priors as its input and returns an object of class bvar (see below).

If multiple models should be estimated, the function allows researchers to make use of parallel computing by specifying the argument mc.cores.
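
As a sketch only, assuming own_algorithm is a hypothetical user-written function that takes the prepared model object as input and returns an object of class bvar (like the Gibbs sampler at the end of this introduction), such a call could look as follows:

# Sketch (not run): user-written sampler combined with parallel estimation
# bvar_est_own <- draw_posterior(model_with_priors, FUN = own_algorithm, mc.cores = 2)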

Inspect posterior draws

Posterior draws can be visually inspected by using the plot function. By default, it produces a series of histograms of all estimated coefficients.

plot(bvar_est)

Alternatively, trace plots of the post-burnin draws can be drawn by adding the argument type = "trace":

plot(bvar_est, type = "trace")

Summary statistics

Summary statistics can be obtained in the usual way using the summary method.

summary(bvar_est)
#> 
#> Bayesian VAR model with p = 2 
#> 
#> Model:
#> 
#> y ~ invest.01 + income.01 + cons.01 + invest.02 + income.02 + cons.02 + const
#> 
#> Variable: invest 
#> 
#>              Mean     SD Naive SD Time-series SD    2.5%     50%    97.5%
#> invest.01 -0.3216 0.1283 0.001815       0.001815 -0.5751 -0.3211 -0.06651
#> income.01  0.1390 0.5530 0.007821       0.007821 -0.9322  0.1363  1.23726
#> cons.01    0.9678 0.6854 0.009694       0.010108 -0.3489  0.9675  2.34195
#> invest.02 -0.1624 0.1273 0.001800       0.001751 -0.4196 -0.1598  0.08542
#> income.02  0.1162 0.5407 0.007647       0.007647 -0.9573  0.1170  1.16412
#> cons.02    0.9319 0.6793 0.009607       0.009607 -0.4051  0.9197  2.30012
#> const     -1.6592 1.7658 0.024972       0.024972 -5.1398 -1.6401  1.81714
#> 
#> Variable: income 
#> 
#>               Mean      SD  Naive SD Time-series SD     2.5%      50%  97.5%
#> invest.01  0.04342 0.03235 0.0004576      0.0003990 -0.01951  0.04348 0.1076
#> income.01 -0.15258 0.14124 0.0019975      0.0020544 -0.42742 -0.15229 0.1229
#> cons.01    0.28881 0.17361 0.0024552      0.0024552 -0.04710  0.29005 0.6263
#> invest.02  0.05029 0.03228 0.0004566      0.0004566 -0.01394  0.05049 0.1132
#> income.02  0.01989 0.13814 0.0019536      0.0019079 -0.24668  0.01829 0.2864
#> cons.02   -0.01247 0.17329 0.0024506      0.0024506 -0.36046 -0.01362 0.3144
#> const      1.58298 0.45525 0.0064382      0.0064186  0.68750  1.58338 2.4586
#> 
#> Variable: cons 
#> 
#>                Mean      SD  Naive SD Time-series SD      2.5%       50%
#> invest.01 -0.002953 0.02616 0.0003700      0.0003610 -0.054365 -0.003326
#> income.01  0.222228 0.11370 0.0016080      0.0016080 -0.004871  0.222644
#> cons.01   -0.264927 0.13755 0.0019453      0.0020244 -0.533225 -0.262222
#> invest.02  0.034017 0.02585 0.0003656      0.0003748 -0.016977  0.034325
#> income.02  0.356580 0.11114 0.0015717      0.0015717  0.138094  0.355388
#> cons.02   -0.025693 0.13967 0.0019752      0.0019752 -0.291399 -0.027642
#> const      1.305725 0.36262 0.0051283      0.0050137  0.579527  1.307317
#>             97.5%
#> invest.01 0.04831
#> income.01 0.44407
#> cons.01   0.00117
#> invest.02 0.08363
#> income.02 0.57322
#> cons.02   0.25201
#> const     2.01192
#> 
#> Variance-covariance matrix:
#> 
#>                  Mean     SD Naive SD Time-series SD    2.5%     50%  97.5%
#> invest_invest 22.2445 3.9698 0.056141       0.056451 15.6482 21.8988 30.851
#> invest_income  0.7413 0.7103 0.010045       0.010879 -0.6057  0.7059  2.199
#> invest_cons    1.2872 0.6110 0.008642       0.009155  0.2029  1.2489  2.598
#> income_invest  0.7413 0.7103 0.010045       0.010879 -0.6057  0.7059  2.199
#> income_income  1.4336 0.2654 0.003753       0.004090  1.0060  1.4044  2.022
#> income_cons    0.6420 0.1709 0.002416       0.002751  0.3567  0.6252  1.028
#> cons_invest    1.2872 0.6110 0.008642       0.009155  0.2029  1.2489  2.598
#> cons_income    0.6420 0.1709 0.002416       0.002751  0.3567  0.6252  1.028
#> cons_cons      0.9330 0.1687 0.002385       0.002619  0.6559  0.9134  1.326

As expected for an algorithm with uninformative priors, the posterior means are fairly close to the results of the frequentist least squares (LS) estimator, which can be obtained in the following way:

# Obtain data for LS estimator
y <- t(model$data$Y)
z <- t(model$data$Z)

# Calculate LS estimates
A_freq <- tcrossprod(y, z) %*% solve(tcrossprod(z))

# Round estimates and print
round(A_freq, 3)
#>        invest.01 income.01 cons.01 invest.02 income.02 cons.02  const
#> invest    -0.320     0.146   0.961    -0.161     0.115   0.934 -1.672
#> income     0.044    -0.153   0.289     0.050     0.019  -0.010  1.577
#> cons      -0.002     0.225  -0.264     0.034     0.355  -0.022  1.293

Thin results

The MCMC series in object bvar_est can be thinned using

bvar_est <- thin(bvar_est, thin = 10)

Forecasts

Forecasts with credible bands can be obtained with the function predict. If the model contains deterministic terms, new values can be provided via the argument new_d. If no values are provided, the function sets them to zero. The number of rows of new_d must equal the value of argument n.ahead.

bvar_pred <- predict(bvar_est, n.ahead = 10, new_d = rep(1, 10))

plot(bvar_pred)

Impulse response analysis

bvartools supports commonly used impulse response functions. See https://www.r-econometrics.com/timeseries/irf/ for an introduction.

Forecast error impulse response

FEIR <- irf(bvar_est, impulse = "income", response = "cons", n.ahead = 8)

plot(FEIR, main = "Forecast Error Impulse Response", xlab = "Period", ylab = "Response")

Orthogonalised impulse response

OIR <- irf(bvar_est, impulse = "income", response = "cons", n.ahead = 8, type = "oir")

plot(OIR, main = "Orthogonalised Impulse Response", xlab = "Period", ylab = "Response")

Generalised impulse response

GIR <- irf(bvar_est, impulse = "income", response = "cons", n.ahead = 8, type = "gir")

plot(GIR, main = "Generalised Impulse Response", xlab = "Period", ylab = "Response")

Variance decomposition

bvartools also supports forecast error variance decomposition (FEVD) and generalised forecast error variance decomposition.

Forecast error variance decomposition

bvar_fevd_oir <- fevd(bvar_est, response = "cons")

plot(bvar_fevd_oir, main = "OIR-based FEVD of consumption")

Generalised forecast error variance decomposition

It is also possible to calculate FEVDs that are based on generalised impulse responses (GIR). Note that these do not automatically add up to unity. However, this can be changed by adding normalise_gir = TRUE to the function’s arguments.

bvar_fevd_gir <- fevd(bvar_est, response = "cons", type = "gir")

plot(bvar_fevd_gir, main = "GIR-based FEVD of consumption")
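
Following the note above, the normalised variant, whose shares add up to unity, is obtained by adding normalise_gir = TRUE:

# GIR-based FEVD with shares normalised to add up to unity
bvar_fevd_gir_norm <- fevd(bvar_est, response = "cons", type = "gir", normalise_gir = TRUE)

plot(bvar_fevd_gir_norm, main = "Normalised GIR-based FEVD of consumption")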

Using bvartools with user-written algorithms

bvartools was created to assist researchers in building and evaluating their own posterior simulation algorithms for linear BVAR models. Functions gen_var and add_priors simply help to quickly obtain the relevant data matrices for posterior simulation. Estimation can then be done using algorithms that researchers implement themselves. But once posterior draws are obtained, bvartools can assist throughout the subsequent steps of the analysis, such as the calculation of summary statistics, forecasts, impulse responses and variance decompositions.

If researchers are willing to rely on the model generation and evaluation functions of bvartools, the only remaining step is to combine their own algorithms with the functional framework of the package. This is illustrated in the remainder of this introduction.

Model set-up and prior specifications

These steps are exactly the same as described above. Thus, the following Gibbs sampler starts from object model_with_priors from above.

A Gibbs sampler algorithm

# Reset random number generator for reproducibility
set.seed(1234567)

# Get data matrices
y <- t(model_with_priors$data$Y)
x <- t(model_with_priors$data$Z)

tt <- ncol(y) # Number of observations
k <- nrow(y) # Number of endogenous variables
m <- k * nrow(x) # Number of estimated coefficients

# Priors for coefficients
a_mu_prior <- model_with_priors$priors$coefficients$mu # Prior means
a_v_i_prior <- model_with_priors$priors$coefficients$v_i # Prior precisions

# Priors for error variance-covariance matrix
u_sigma_df_prior <- model_with_priors$priors$sigma$df # Prior degrees of freedom
u_sigma_scale_prior <- model_with_priors$priors$sigma$scale # Prior scale matrix
u_sigma_df_post <- tt + u_sigma_df_prior # Posterior degrees of freedom

# Initial values for variance-covariance matrix
u_sigma <- diag(.00001, k)
u_sigma_i <- solve(u_sigma)

# Number of iterations of the Gibbs sampler
iterations <- model_with_priors$model$iterations 
# Number of burn-in draws
burnin <- model_with_priors$model$burnin
# Total number of draws
draws <- iterations + burnin

# Storage of posterior draws
draws_a <- matrix(NA, m, iterations)
draws_sigma <- matrix(NA, k^2, iterations)

# Start Gibbs sampler
for (draw in 1:draws) {
  
  # Draw conditional mean parameters
  a <- post_normal(y, x, u_sigma_i, a_mu_prior, a_v_i_prior)
  
  # Draw variance-covariance matrix
  u <- y - matrix(a, k) %*% x # Obtain residuals
  u_sigma_scale_post <- solve(u_sigma_scale_prior + tcrossprod(u))
  u_sigma_i <- matrix(rWishart(1, u_sigma_df_post, u_sigma_scale_post)[,, 1], k)
  
  # Store draws
  if (draw > burnin) {
    draws_a[, draw - burnin] <- a
    draws_sigma[, draw - burnin] <- solve(u_sigma_i) # Invert Sigma_i to obtain Sigma
  }
}

bvar objects

The bvar function can be used to collect relevant output of the Gibbs sampler into a standardised object, which can be used by functions such as predict to obtain forecasts or irf for impulse response analysis.

bvar_est_two <- bvar(y = model_with_priors$data$Y,
                     x = model_with_priors$data$Z,
                     A = draws_a[1:18,],
                     C = draws_a[19:21, ],
                     Sigma = draws_sigma)

Since function bvar produces an object of class bvar, just like the output of draw_posterior, summary statistics, forecasts, impulse responses and forecast error variance decompositions are obtained as described above.

summary(bvar_est_two)
#> 
#> Bayesian VAR model with p = 2 
#> 
#> Model:
#> 
#> y ~ invest.01 + income.01 + cons.01 + invest.02 + income.02 + cons.02 + const
#> 
#> Variable: invest 
#> 
#>              Mean     SD Naive SD Time-series SD    2.5%     50%    97.5%
#> invest.01 -0.3171 0.1287 0.001820       0.001781 -0.5727 -0.3161 -0.06312
#> income.01  0.1465 0.5596 0.007914       0.007914 -0.9499  0.1503  1.24997
#> cons.01    0.9576 0.6908 0.009770       0.009770 -0.3801  0.9519  2.30291
#> invest.02 -0.1599 0.1263 0.001787       0.001787 -0.4050 -0.1609  0.08937
#> income.02  0.1128 0.5458 0.007719       0.007719 -0.9591  0.1133  1.18511
#> cons.02    0.9202 0.6874 0.009721       0.009721 -0.4190  0.9364  2.28972
#> const     -1.6451 1.7712 0.025049       0.025049 -5.1131 -1.6536  1.82543
#> 
#> Variable: income 
#> 
#>               Mean      SD  Naive SD Time-series SD     2.5%      50%  97.5%
#> invest.01  0.04373 0.03269 0.0004623      0.0004623 -0.02191  0.04371 0.1069
#> income.01 -0.15152 0.14307 0.0020233      0.0019753 -0.43908 -0.14968 0.1253
#> cons.01    0.28606 0.17319 0.0024492      0.0024492 -0.04684  0.28727 0.6285
#> invest.02  0.05001 0.03239 0.0004580      0.0004580 -0.01478  0.04986 0.1129
#> income.02  0.02069 0.13831 0.0019559      0.0019559 -0.24879  0.01900 0.2964
#> cons.02   -0.01443 0.17287 0.0024448      0.0024448 -0.34814 -0.01354 0.3203
#> const      1.58537 0.45301 0.0064065      0.0064065  0.70491  1.58560 2.4734
#> 
#> Variable: cons 
#> 
#>                Mean      SD  Naive SD Time-series SD      2.5%       50%
#> invest.01 -0.003013 0.02659 0.0003761      0.0003761 -0.056220 -0.002736
#> income.01  0.223180 0.11491 0.0016250      0.0016250  0.001509  0.224370
#> cons.01   -0.262224 0.13898 0.0019655      0.0020287 -0.538874 -0.257616
#> invest.02  0.034337 0.02588 0.0003660      0.0003660 -0.018436  0.034896
#> income.02  0.355342 0.11217 0.0015863      0.0015863  0.133435  0.357047
#> cons.02   -0.025449 0.13903 0.0019662      0.0019662 -0.297178 -0.025842
#> const      1.297327 0.36338 0.0051389      0.0051389  0.607255  1.287345
#>              97.5%
#> invest.01 0.049540
#> income.01 0.448329
#> cons.01   0.002043
#> invest.02 0.084839
#> income.02 0.572082
#> cons.02   0.254113
#> const     2.013921
#> 
#> Variance-covariance matrix:
#> 
#>                  Mean     SD Naive SD Time-series SD    2.5%     50%  97.5%
#> invest_invest 22.3994 4.0777 0.057668       0.065232 15.9104 21.8679 31.819
#> invest_income  0.7388 0.7260 0.010267       0.011516 -0.6131  0.7156  2.268
#> invest_cons    1.2950 0.5977 0.008452       0.009363  0.2147  1.2703  2.534
#> income_invest  0.7388 0.7260 0.010267       0.011516 -0.6131  0.7156  2.268
#> income_income  1.4395 0.2677 0.003786       0.004205  1.0158  1.4042  2.057
#> income_cons    0.6422 0.1690 0.002391       0.002679  0.3537  0.6262  1.017
#> cons_invest    1.2950 0.5977 0.008452       0.009363  0.2147  1.2703  2.534
#> cons_income    0.6422 0.1690 0.002391       0.002679  0.3537  0.6262  1.017
#> cons_cons      0.9352 0.1678 0.002373       0.002649  0.6597  0.9183  1.318
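
For example, forecasts and a generalised impulse response for the user-estimated model can be obtained with the same calls as before:

# Forecasts with the user-estimated model
bvar_pred_two <- predict(bvar_est_two, n.ahead = 10, new_d = rep(1, 10))

plot(bvar_pred_two)

# Generalised impulse response of consumption to an income shock
GIR_two <- irf(bvar_est_two, impulse = "income", response = "cons", n.ahead = 8, type = "gir")

plot(GIR_two, main = "Generalised Impulse Response", xlab = "Period", ylab = "Response")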

References

Chan, J., Koop, G., Poirier, D. J., & Tobias, J. L. (2019). Bayesian econometric methods (2nd ed.). Cambridge: Cambridge University Press.

Eddelbuettel, D., & Sanderson, C. (2014). RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Computational Statistics and Data Analysis, 71, 1054-1063. https://doi.org/10.1016/j.csda.2013.02.005

George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553-580. https://doi.org/10.1016/j.jeconom.2007.08.017

Kim, S., Shephard, N., & Chib, S. (1998). Stochastic volatility: Likelihood inference and comparison with ARCH models. Review of Economic Studies, 65(3), 361-396.

Koop, G., León-González, R., & Strachan, R. W. (2010). Efficient posterior simulation for cointegrated models with priors on the cointegration space. Econometric Reviews, 29(2), 224-242. https://doi.org/10.1080/07474930903382208

Koop, G., León-González, R., & Strachan, R. W. (2011). Bayesian inference in a time varying cointegration model. Journal of Econometrics, 165(2), 210-220. https://doi.org/10.1016/j.jeconom.2011.07.007

Koop, G., Pesaran, M. H., & Potter, S. M. (1996). Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1), 119-147. https://doi.org/10.1016/0304-4076(95)01753-4

Korobilis, D. (2013). VAR forecasting using Bayesian variable selection. Journal of Applied Econometrics, 28(2), 204-230. https://doi.org/10.1002/jae.1271

Lütkepohl, H. (2006). New introduction to multiple time series analysis (2nd ed.). Berlin: Springer.

Pesaran, H. H., & Shin, Y. (1998). Generalized impulse response analysis in linear multivariate models. Economics Letters, 58(1), 17-29. https://doi.org/10.1016/S0165-1765(97)00214-0

Sanderson, C., & Curtin, R. (2016). Armadillo: a template-based C++ library for linear algebra. Journal of Open Source Software, 1(2), 26. https://doi.org/10.21105/joss.00026


  1. Further examples of the use of the bvartools package are available at https://www.r-econometrics.com/timeseriesintro/.

  2. RcppArmadillo is the Rcpp bridge to the open source ‘Armadillo’ library of Sanderson and Curtin (2016).